00:00:00.000 Started by upstream project "autotest-per-patch" build number 132786
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.115 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.116 The recommended git tool is: git
00:00:00.116 using credential 00000000-0000-0000-0000-000000000002
00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.167 Fetching changes from the remote Git repository
00:00:00.169 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.219 Using shallow fetch with depth 1
00:00:00.219 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.219 > git --version # timeout=10
00:00:00.251 > git --version # 'git version 2.39.2'
00:00:00.251 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.769 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.792 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.805 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.805 > git config core.sparsecheckout # timeout=10
00:00:06.816 > git read-tree -mu HEAD # timeout=10
00:00:06.833 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.858 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.858 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.003 [Pipeline] Start of Pipeline
00:00:07.017 [Pipeline] library
00:00:07.019 Loading library shm_lib@master
00:00:07.019 Library shm_lib@master is cached. Copying from home.
00:00:07.034 [Pipeline] node
00:00:07.043 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.044 [Pipeline] {
00:00:07.052 [Pipeline] catchError
00:00:07.052 [Pipeline] {
00:00:07.063 [Pipeline] wrap
00:00:07.070 [Pipeline] {
00:00:07.078 [Pipeline] stage
00:00:07.079 [Pipeline] { (Prologue)
00:00:07.454 [Pipeline] sh
00:00:07.751 + logger -p user.info -t JENKINS-CI
00:00:07.768 [Pipeline] echo
00:00:07.769 Node: CYP12
00:00:07.775 [Pipeline] sh
00:00:08.121 [Pipeline] setCustomBuildProperty
00:00:08.130 [Pipeline] echo
00:00:08.131 Cleanup processes
00:00:08.135 [Pipeline] sh
00:00:08.423 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.423 3193148 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.438 [Pipeline] sh
00:00:08.730 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.730 ++ grep -v 'sudo pgrep'
00:00:08.730 ++ awk '{print $1}'
00:00:08.730 + sudo kill -9
00:00:08.730 + true
00:00:08.747 [Pipeline] cleanWs
00:00:08.757 [WS-CLEANUP] Deleting project workspace...
00:00:08.757 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.764 [WS-CLEANUP] done
00:00:08.768 [Pipeline] setCustomBuildProperty
00:00:08.779 [Pipeline] sh
00:00:09.068 + sudo git config --global --replace-all safe.directory '*'
00:00:09.188 [Pipeline] httpRequest
00:00:10.720 [Pipeline] echo
00:00:10.722 Sorcerer 10.211.164.112 is alive
00:00:10.733 [Pipeline] retry
00:00:10.735 [Pipeline] {
00:00:10.750 [Pipeline] httpRequest
00:00:10.755 HttpMethod: GET
00:00:10.756 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.757 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.778 Response Code: HTTP/1.1 200 OK
00:00:10.778 Success: Status code 200 is in the accepted range: 200,404
00:00:10.779 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.694 [Pipeline] }
00:00:14.711 [Pipeline] // retry
00:00:14.718 [Pipeline] sh
00:00:15.011 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.029 [Pipeline] httpRequest
00:00:15.461 [Pipeline] echo
00:00:15.463 Sorcerer 10.211.164.112 is alive
00:00:15.473 [Pipeline] retry
00:00:15.475 [Pipeline] {
00:00:15.489 [Pipeline] httpRequest
00:00:15.494 HttpMethod: GET
00:00:15.495 URL: http://10.211.164.112/packages/spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:00:15.496 Sending request to url: http://10.211.164.112/packages/spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:00:15.522 Response Code: HTTP/1.1 200 OK
00:00:15.522 Success: Status code 200 is in the accepted range: 200,404
00:00:15.523 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:01:16.221 [Pipeline] }
00:01:16.239 [Pipeline] // retry
00:01:16.246 [Pipeline] sh
00:01:16.538 + tar --no-same-owner -xf spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:01:19.856 [Pipeline] sh
00:01:20.145 + git -C spdk log --oneline -n5
00:01:20.145 51286f61a bdev: simplify bdev_reset_freeze_channel
00:01:20.145 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:20.145 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:20.145 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:20.145 0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:20.158 [Pipeline] }
00:01:20.171 [Pipeline] // stage
00:01:20.179 [Pipeline] stage
00:01:20.181 [Pipeline] { (Prepare)
00:01:20.195 [Pipeline] writeFile
00:01:20.210 [Pipeline] sh
00:01:20.597 + logger -p user.info -t JENKINS-CI
00:01:20.611 [Pipeline] sh
00:01:20.902 + logger -p user.info -t JENKINS-CI
00:01:20.915 [Pipeline] sh
00:01:21.206 + cat autorun-spdk.conf
00:01:21.206 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.206 SPDK_TEST_NVMF=1
00:01:21.206 SPDK_TEST_NVME_CLI=1
00:01:21.206 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:21.206 SPDK_TEST_NVMF_NICS=e810
00:01:21.206 SPDK_TEST_VFIOUSER=1
00:01:21.206 SPDK_RUN_UBSAN=1
00:01:21.206 NET_TYPE=phy
00:01:21.216 RUN_NIGHTLY=0
00:01:21.220 [Pipeline] readFile
00:01:21.249 [Pipeline] withEnv
00:01:21.251 [Pipeline] {
00:01:21.264 [Pipeline] sh
00:01:21.554 + set -ex
00:01:21.554 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:21.554 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:21.554 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.554 ++ SPDK_TEST_NVMF=1
00:01:21.554 ++ SPDK_TEST_NVME_CLI=1
00:01:21.554 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:21.554 ++ SPDK_TEST_NVMF_NICS=e810
00:01:21.554 ++ SPDK_TEST_VFIOUSER=1
00:01:21.554 ++ SPDK_RUN_UBSAN=1
00:01:21.554 ++ NET_TYPE=phy
00:01:21.554 ++ RUN_NIGHTLY=0
00:01:21.554 + case $SPDK_TEST_NVMF_NICS in
00:01:21.554 + DRIVERS=ice
00:01:21.554 + [[ tcp == \r\d\m\a ]]
00:01:21.554 + [[ -n ice ]]
00:01:21.554 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:31.561 rmmod: ERROR: Module irdma is not currently loaded
00:01:31.561 rmmod: ERROR: Module i40iw is not currently loaded
00:01:31.561 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:31.561 + true
00:01:31.561 + for D in $DRIVERS
00:01:31.561 + sudo modprobe ice
00:01:31.561 + exit 0
00:01:31.572 [Pipeline] }
00:01:31.587 [Pipeline] // withEnv
00:01:31.592 [Pipeline] }
00:01:31.606 [Pipeline] // stage
00:01:31.616 [Pipeline] catchError
00:01:31.617 [Pipeline] {
00:01:31.631 [Pipeline] timeout
00:01:31.632 Timeout set to expire in 1 hr 0 min
00:01:31.634 [Pipeline] {
00:01:31.648 [Pipeline] stage
00:01:31.650 [Pipeline] { (Tests)
00:01:31.664 [Pipeline] sh
00:01:31.958 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.958 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.958 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.958 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:31.958 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:31.958 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:31.958 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:31.958 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:31.958 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:31.958 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:31.958 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:31.958 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:31.958 + source /etc/os-release
00:01:31.958 ++ NAME='Fedora Linux'
00:01:31.958 ++ VERSION='39 (Cloud Edition)'
00:01:31.958 ++ ID=fedora
00:01:31.958 ++ VERSION_ID=39
00:01:31.958 ++ VERSION_CODENAME=
00:01:31.958 ++ PLATFORM_ID=platform:f39
00:01:31.958 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:31.958 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:31.958 ++ LOGO=fedora-logo-icon
00:01:31.958 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:31.958 ++ HOME_URL=https://fedoraproject.org/
00:01:31.958 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:31.958 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:31.958 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:31.958 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:31.958 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:31.958 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:31.958 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:31.958 ++ SUPPORT_END=2024-11-12
00:01:31.958 ++ VARIANT='Cloud Edition'
00:01:31.958 ++ VARIANT_ID=cloud
00:01:31.958 + uname -a
00:01:31.958 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:31.958 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:35.263 Hugepages
00:01:35.263 node hugesize free / total
00:01:35.263 node0 1048576kB 0 / 0
00:01:35.263 node0 2048kB 0 / 0
00:01:35.263 node1 1048576kB 0 / 0
00:01:35.263 node1 2048kB 0 / 0
00:01:35.263
00:01:35.263 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:35.263 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:35.263 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:35.263 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:35.263 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:35.263 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:35.263 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:35.263 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:35.263 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:35.263 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:35.263 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:35.263 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:35.263 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:35.263 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:35.263 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:35.263 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:35.263 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:35.263 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:35.263 + rm -f /tmp/spdk-ld-path
00:01:35.263 + source autorun-spdk.conf
00:01:35.263 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.263 ++ SPDK_TEST_NVMF=1
00:01:35.263 ++ SPDK_TEST_NVME_CLI=1
00:01:35.263 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:35.263 ++ SPDK_TEST_NVMF_NICS=e810
00:01:35.263 ++ SPDK_TEST_VFIOUSER=1
00:01:35.263 ++ SPDK_RUN_UBSAN=1
00:01:35.263 ++ NET_TYPE=phy
00:01:35.263 ++ RUN_NIGHTLY=0
00:01:35.263 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:35.263 + [[ -n '' ]]
00:01:35.263 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:35.263 + for M in /var/spdk/build-*-manifest.txt
00:01:35.263 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:35.263 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:35.263 + for M in /var/spdk/build-*-manifest.txt
00:01:35.263 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:35.263 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:35.263 + for M in /var/spdk/build-*-manifest.txt
00:01:35.263 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:35.263 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:35.263 ++ uname
00:01:35.263 + [[ Linux == \L\i\n\u\x ]]
00:01:35.263 + sudo dmesg -T
00:01:35.263 + sudo dmesg --clear
00:01:35.263 + dmesg_pid=3194723
00:01:35.263 + [[ Fedora Linux == FreeBSD ]]
00:01:35.263 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:35.263 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:35.263 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:35.263 + [[ -x /usr/src/fio-static/fio ]]
00:01:35.263 + export FIO_BIN=/usr/src/fio-static/fio
00:01:35.263 + FIO_BIN=/usr/src/fio-static/fio
00:01:35.264 + sudo dmesg -Tw
00:01:35.264 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:35.264 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:35.264 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:35.264 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:35.264 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:35.264 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:35.264 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:35.264 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:35.264 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
11:16:27 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
11:16:27 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
11:16:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
11:16:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
11:16:27 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
11:16:27 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
11:16:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
11:16:27 -- scripts/common.sh@15 -- $ shopt -s extglob
11:16:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
11:16:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:16:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:16:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:16:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:16:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:16:27 -- paths/export.sh@5 -- $ export PATH
11:16:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:16:27 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:35.526 11:16:27 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:35.526 11:16:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733739387.XXXXXX
00:01:35.526 11:16:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733739387.nn5ebD
00:01:35.526 11:16:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:35.526 11:16:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:35.526 11:16:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:35.526 11:16:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:35.526 11:16:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:35.526 11:16:27 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:35.526 11:16:27 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:35.526 11:16:27 -- common/autotest_common.sh@10 -- $ set +x
00:01:35.526 11:16:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:35.526 11:16:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:35.526 11:16:27 -- pm/common@17 -- $ local monitor
00:01:35.526 11:16:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.526 11:16:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.526 11:16:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.526 11:16:27 -- pm/common@21 -- $ date +%s
00:01:35.526 11:16:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.526 11:16:27 -- pm/common@21 -- $ date +%s
00:01:35.526 11:16:27 -- pm/common@25 -- $ sleep 1
00:01:35.526 11:16:27 -- pm/common@21 -- $ date +%s
00:01:35.526 11:16:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733739387
00:01:35.526 11:16:27 -- pm/common@21 -- $ date +%s
00:01:35.526 11:16:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733739387
00:01:35.526 11:16:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733739387
00:01:35.526 11:16:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733739387
00:01:35.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733739387_collect-cpu-load.pm.log
00:01:35.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733739387_collect-vmstat.pm.log
00:01:35.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733739387_collect-cpu-temp.pm.log
00:01:35.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733739387_collect-bmc-pm.bmc.pm.log
00:01:36.469 11:16:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:36.469 11:16:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:36.469 11:16:28 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:36.469 11:16:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.469 11:16:28 -- spdk/autobuild.sh@16 -- $ date -u
00:01:36.469 Mon Dec 9 10:16:28 AM UTC 2024
00:01:36.469 11:16:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:36.469 v25.01-pre-312-g51286f61a
00:01:36.469 11:16:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:36.469 11:16:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:36.469 11:16:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:36.469 11:16:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:36.469 11:16:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:36.469 11:16:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.469 ************************************
00:01:36.469 START TEST ubsan
00:01:36.469 ************************************
00:01:36.469 11:16:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:36.469 using ubsan
00:01:36.469
00:01:36.469 real 0m0.001s
00:01:36.469 user 0m0.001s
00:01:36.469 sys 0m0.000s
00:01:36.469 11:16:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:36.469 11:16:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:36.469 ************************************
00:01:36.469 END TEST ubsan
00:01:36.469 ************************************
00:01:36.469 11:16:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:36.469 11:16:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:36.469 11:16:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:36.469 11:16:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:36.469 11:16:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:36.469 11:16:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:36.469 11:16:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:36.469 11:16:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:36.469 11:16:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:36.730 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:36.730 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:36.991 Using 'verbs' RDMA provider
00:01:52.881 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:05.131 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:05.131 Creating mk/config.mk...done.
00:02:05.131 Creating mk/cc.flags.mk...done.
00:02:05.131 Type 'make' to build.
00:02:05.131 11:16:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:05.131 11:16:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:05.131 11:16:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:05.131 11:16:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:05.131 ************************************
00:02:05.131 START TEST make
00:02:05.131 ************************************
00:02:05.131 11:16:56 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:05.392 make[1]: Nothing to be done for 'all'.
00:02:06.794 The Meson build system
00:02:06.794 Version: 1.5.0
00:02:06.794 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:06.794 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.794 Build type: native build
00:02:06.794 Project name: libvfio-user
00:02:06.794 Project version: 0.0.1
00:02:06.794 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:06.794 C linker for the host machine: cc ld.bfd 2.40-14
00:02:06.794 Host machine cpu family: x86_64
00:02:06.794 Host machine cpu: x86_64
00:02:06.794 Run-time dependency threads found: YES
00:02:06.794 Library dl found: YES
00:02:06.794 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:06.794 Run-time dependency json-c found: YES 0.17
00:02:06.794 Run-time dependency cmocka found: YES 1.1.7
00:02:06.794 Program pytest-3 found: NO
00:02:06.794 Program flake8 found: NO
00:02:06.794 Program misspell-fixer found: NO
00:02:06.794 Program restructuredtext-lint found: NO
00:02:06.794 Program valgrind found: YES (/usr/bin/valgrind)
00:02:06.794 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:06.794 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:06.794 Compiler for C supports arguments -Wwrite-strings: YES
00:02:06.794 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.794 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:06.794 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:06.794 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.794 Build targets in project: 8
00:02:06.794 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:06.794 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:06.794
00:02:06.794 libvfio-user 0.0.1
00:02:06.794
00:02:06.794 User defined options
00:02:06.794 buildtype : debug
00:02:06.794 default_library: shared
00:02:06.794 libdir : /usr/local/lib
00:02:06.794
00:02:06.794 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:07.057 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:07.057 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:07.057 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:07.057 [3/37] Compiling C object samples/null.p/null.c.o
00:02:07.057 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:07.057 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:07.057 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:07.057 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:07.057 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:07.057 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:07.057 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:07.057 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:07.057 [12/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:07.057 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:07.057 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:07.057 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:07.057 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:07.057 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:07.057 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:07.057 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:07.057 [20/37] Compiling C object samples/server.p/server.c.o
00:02:07.057 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:07.057 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:07.057 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:07.057 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:07.057 [25/37] Compiling C object samples/client.p/client.c.o
00:02:07.057 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:07.321 [27/37] Linking target samples/client
00:02:07.321 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:07.321 [29/37] Linking target test/unit_tests
00:02:07.321 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:07.321 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:02:07.585 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:07.585 [33/37] Linking target samples/server
00:02:07.585 [34/37] Linking target samples/lspci
00:02:07.585 [35/37] Linking target samples/null
00:02:07.585 [36/37] Linking target samples/gpio-pci-idio-16
00:02:07.585 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:07.585 INFO: autodetecting backend as ninja
00:02:07.585 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.585 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.849 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:07.849 ninja: no work to do.
00:02:14.457 The Meson build system
00:02:14.457 Version: 1.5.0
00:02:14.457 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:14.457 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:14.457 Build type: native build
00:02:14.457 Program cat found: YES (/usr/bin/cat)
00:02:14.457 Project name: DPDK
00:02:14.457 Project version: 24.03.0
00:02:14.457 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:14.457 C linker for the host machine: cc ld.bfd 2.40-14
00:02:14.457 Host machine cpu family: x86_64
00:02:14.457 Host machine cpu: x86_64
00:02:14.457 Message: ## Building in Developer Mode ##
00:02:14.457 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:14.457 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:14.457 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:14.457 Program python3 found: YES (/usr/bin/python3)
00:02:14.457 Program cat found: YES (/usr/bin/cat)
00:02:14.458 Compiler for C supports arguments -march=native: YES
00:02:14.458 Checking for size of "void *" : 8
00:02:14.458 Checking for size of "void *" : 8 (cached)
00:02:14.458 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:14.458 Library m found: YES
00:02:14.458 Library numa found: YES
00:02:14.458 Has header "numaif.h" : YES
00:02:14.458 Library fdt found: NO
00:02:14.458 Library execinfo found: NO
00:02:14.458 Has header "execinfo.h" : YES
00:02:14.458 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:14.458 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:14.458 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:14.458 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:14.458 Run-time dependency openssl found: YES 3.1.1
00:02:14.458 Run-time dependency libpcap found: YES 1.10.4
00:02:14.458 Has header "pcap.h" with dependency libpcap: YES
00:02:14.458 Compiler for C supports arguments -Wcast-qual: YES
00:02:14.458 Compiler for C supports arguments -Wdeprecated: YES
00:02:14.458 Compiler for C supports arguments -Wformat: YES
00:02:14.458 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:14.458 Compiler for C supports arguments -Wformat-security: NO
00:02:14.458 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:14.458 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:14.458 Compiler for C supports arguments -Wnested-externs: YES
00:02:14.458 Compiler for C supports arguments -Wold-style-definition: YES
00:02:14.458 Compiler for C supports arguments -Wpointer-arith: YES
00:02:14.458 Compiler for C supports arguments -Wsign-compare: YES
00:02:14.458 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:14.458 Compiler for C supports arguments -Wundef: YES
00:02:14.458 Compiler for C supports arguments -Wwrite-strings: YES
00:02:14.458 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:14.458 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:14.458 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:14.458 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:14.458 Program objdump found: YES (/usr/bin/objdump)
00:02:14.458 Compiler for C supports arguments -mavx512f: YES
00:02:14.458 Checking if "AVX512 checking" compiles: YES
00:02:14.458 Fetching value of define "__SSE4_2__" : 1
00:02:14.458 Fetching value of define "__AES__" : 1
00:02:14.458 Fetching value of define "__AVX__" : 1
00:02:14.458 Fetching value of define "__AVX2__" : 1
00:02:14.458 Fetching value of define "__AVX512BW__" : 1
00:02:14.458 Fetching value of define "__AVX512CD__" : 1
00:02:14.458 Fetching value of define "__AVX512DQ__" : 1
00:02:14.458 Fetching value of define "__AVX512F__" : 1
00:02:14.458 Fetching value of define "__AVX512VL__" : 1
00:02:14.458 Fetching value of define "__PCLMUL__" : 1
00:02:14.458 Fetching value of define "__RDRND__" : 1
00:02:14.458 Fetching value of define "__RDSEED__" : 1
00:02:14.458 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:14.458 Fetching value of define "__znver1__" : (undefined)
00:02:14.458 Fetching value of define "__znver2__" : (undefined)
00:02:14.458 Fetching value of define "__znver3__" : (undefined)
00:02:14.458 Fetching value of define "__znver4__" : (undefined)
00:02:14.458 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:14.458 Message: lib/log: Defining dependency "log"
00:02:14.458 Message: lib/kvargs: Defining dependency "kvargs"
00:02:14.458 Message: lib/telemetry: Defining dependency "telemetry"
00:02:14.458 Checking for function "getentropy" : NO
00:02:14.458 Message: lib/eal: Defining dependency "eal"
00:02:14.458 Message: lib/ring: Defining dependency "ring"
00:02:14.458 Message: lib/rcu: Defining dependency "rcu"
00:02:14.458 Message: lib/mempool: Defining dependency "mempool"
00:02:14.458 Message: lib/mbuf: Defining dependency "mbuf"
00:02:14.458 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:14.458 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:14.458 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:14.458 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:14.458 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:14.458 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:14.458 Compiler for C supports arguments -mpclmul: YES
00:02:14.458 Compiler for C supports arguments -maes: YES
00:02:14.458 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:14.458 Compiler for C supports arguments -mavx512bw: YES
00:02:14.458 Compiler for C supports arguments -mavx512dq: YES
00:02:14.458 Compiler for C supports arguments -mavx512vl: YES
00:02:14.458 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:14.458 Compiler for C supports arguments -mavx2: YES
00:02:14.458 Compiler for C supports arguments -mavx: YES
00:02:14.458 Message: lib/net: Defining dependency "net"
00:02:14.458 Message: lib/meter: Defining dependency "meter"
00:02:14.458 Message: lib/ethdev: Defining dependency "ethdev"
00:02:14.458 Message: lib/pci: Defining dependency "pci"
00:02:14.458 Message: lib/cmdline: Defining dependency "cmdline"
00:02:14.458 Message: lib/hash: Defining dependency "hash"
00:02:14.458 Message: lib/timer: Defining dependency "timer"
00:02:14.458 Message: lib/compressdev: Defining dependency "compressdev"
00:02:14.458 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:14.458 Message: lib/dmadev: Defining dependency "dmadev"
00:02:14.458 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:14.458 Message: lib/power: Defining dependency "power"
00:02:14.458 Message: lib/reorder: Defining dependency "reorder"
00:02:14.458 Message: lib/security: Defining dependency "security"
00:02:14.458 Has header "linux/userfaultfd.h" : YES
00:02:14.458 Has header "linux/vduse.h" : YES
00:02:14.458 Message: lib/vhost: Defining dependency "vhost"
00:02:14.458 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:14.458 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:14.458 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:14.458 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:14.458 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:14.458 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:14.458 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:14.458 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:14.458 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:14.458 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:14.458 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:14.458 Configuring doxy-api-html.conf using configuration
00:02:14.458 Configuring doxy-api-man.conf using configuration
00:02:14.458 Program mandb found: YES (/usr/bin/mandb)
00:02:14.458 Program sphinx-build found: NO
00:02:14.458 Configuring rte_build_config.h using configuration
00:02:14.458 Message:
00:02:14.458 =================
00:02:14.458 Applications Enabled
00:02:14.458 =================
00:02:14.458
00:02:14.458 apps:
00:02:14.458
00:02:14.458
00:02:14.458 Message:
00:02:14.458 =================
00:02:14.458 Libraries Enabled
00:02:14.458 =================
00:02:14.458
00:02:14.458 libs:
00:02:14.458 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:14.458 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:14.458 cryptodev, dmadev, power, reorder, security, vhost,
00:02:14.458
00:02:14.458 Message:
00:02:14.458 ===============
00:02:14.458 Drivers Enabled
00:02:14.458 ===============
00:02:14.458
00:02:14.458 common:
00:02:14.458
00:02:14.458 bus:
00:02:14.458 pci, vdev,
00:02:14.458 mempool:
00:02:14.458 ring,
00:02:14.458 dma:
00:02:14.458
00:02:14.458 net:
00:02:14.458
00:02:14.458 crypto:
00:02:14.458
00:02:14.458 compress:
00:02:14.458
00:02:14.458 vdpa:
00:02:14.458
00:02:14.458
00:02:14.458 Message:
00:02:14.458 =================
00:02:14.458 Content Skipped
00:02:14.458 =================
00:02:14.458
00:02:14.458 apps:
00:02:14.458 dumpcap: explicitly disabled via build config
00:02:14.458 graph: explicitly disabled via build config
00:02:14.458 pdump: explicitly disabled via build config
00:02:14.458 proc-info: explicitly disabled via build config
00:02:14.458 test-acl: explicitly disabled via build config
00:02:14.458 test-bbdev: explicitly disabled via build config
00:02:14.458 test-cmdline: explicitly disabled via build config
00:02:14.458 test-compress-perf: explicitly disabled via build config
00:02:14.458 test-crypto-perf: explicitly disabled via build config
00:02:14.458 test-dma-perf: explicitly disabled via build config
00:02:14.458 test-eventdev: explicitly disabled via build config
00:02:14.458 test-fib: explicitly disabled via build config
00:02:14.458 test-flow-perf: explicitly disabled via build config
00:02:14.458 test-gpudev: explicitly disabled via build config
00:02:14.458 test-mldev: explicitly disabled via build config
00:02:14.458 test-pipeline: explicitly disabled via build config
00:02:14.458 test-pmd: explicitly disabled via build config
00:02:14.458 test-regex: explicitly disabled via build config
00:02:14.458 test-sad: explicitly disabled via build config
00:02:14.458 test-security-perf: explicitly disabled via build config
00:02:14.458
00:02:14.458 libs:
00:02:14.458 argparse: explicitly disabled via build config
00:02:14.458 metrics: explicitly disabled via build config
00:02:14.458 acl: explicitly disabled via build config
00:02:14.458 bbdev: explicitly disabled via build config
00:02:14.458 bitratestats: explicitly disabled via build config
00:02:14.458 bpf: explicitly disabled via build config
00:02:14.458 cfgfile: explicitly disabled via build config
00:02:14.458 distributor: explicitly disabled via build config
00:02:14.458 efd: explicitly disabled via build config
00:02:14.458 eventdev: explicitly disabled via build config
00:02:14.458 dispatcher: explicitly disabled via build config
00:02:14.458 gpudev: explicitly disabled via build config
00:02:14.458 gro: explicitly disabled via build config
00:02:14.458 gso: explicitly disabled via build config
00:02:14.458 ip_frag: explicitly disabled via build config
00:02:14.458 jobstats: explicitly disabled via build config
00:02:14.458 latencystats: explicitly disabled via build config
00:02:14.458 lpm: explicitly disabled via build config
00:02:14.458 member: explicitly disabled via build config
00:02:14.458 pcapng: explicitly disabled via build config
00:02:14.458 rawdev: explicitly disabled via build config
00:02:14.458 regexdev: explicitly disabled via build config
00:02:14.458 mldev: explicitly disabled via build config
00:02:14.458 rib: explicitly disabled via build config
00:02:14.458 sched: explicitly disabled via build config
00:02:14.458 stack: explicitly disabled via build config
00:02:14.458 ipsec: explicitly disabled via build config
00:02:14.458 pdcp: explicitly disabled via build config
00:02:14.458 fib: explicitly disabled via build config
00:02:14.458 port: explicitly disabled via build config
00:02:14.458 pdump: explicitly disabled via build config
00:02:14.458 table: explicitly disabled via build config
00:02:14.458 pipeline: explicitly disabled via build config
00:02:14.458 graph: explicitly disabled via build config
00:02:14.458 node: explicitly disabled via build config
00:02:14.458
00:02:14.458 drivers:
00:02:14.458 common/cpt: not in enabled drivers build config
00:02:14.458 common/dpaax: not in enabled drivers build config
00:02:14.458 common/iavf: not in enabled drivers build config
00:02:14.458 common/idpf: not in enabled drivers build config
00:02:14.458 common/ionic: not in enabled drivers build config
00:02:14.458 common/mvep: not in enabled drivers build config
00:02:14.458 common/octeontx: not in enabled drivers build config
00:02:14.458 bus/auxiliary: not in enabled drivers build config
00:02:14.458 bus/cdx: not in enabled drivers build config
00:02:14.458 bus/dpaa: not in enabled drivers build config
00:02:14.458 bus/fslmc: not in enabled drivers build config
00:02:14.458 bus/ifpga: not in enabled drivers build config
00:02:14.458 bus/platform: not in enabled drivers build config
00:02:14.458 bus/uacce: not in enabled drivers build config
00:02:14.458 bus/vmbus: not in enabled drivers build config
00:02:14.458 common/cnxk: not in enabled drivers build config
00:02:14.458 common/mlx5: not in enabled drivers build config
00:02:14.458 common/nfp: not in enabled drivers build config
00:02:14.458 common/nitrox: not in enabled drivers build config
00:02:14.458 common/qat: not in enabled drivers build config
00:02:14.458 common/sfc_efx: not in enabled drivers build config
00:02:14.458 mempool/bucket: not in enabled drivers build config
00:02:14.458 mempool/cnxk: not in enabled drivers build config
00:02:14.458 mempool/dpaa: not in enabled drivers build config
00:02:14.458 mempool/dpaa2: not in enabled drivers build config
00:02:14.458 mempool/octeontx: not in enabled drivers build config
00:02:14.458 mempool/stack: not in enabled drivers build config
00:02:14.458 dma/cnxk: not in enabled drivers build config
00:02:14.458 dma/dpaa: not in enabled drivers build config
00:02:14.458 dma/dpaa2: not in enabled drivers build config
00:02:14.458 dma/hisilicon: not in enabled drivers build config
00:02:14.458 dma/idxd: not in enabled drivers build config
00:02:14.458 dma/ioat: not in enabled drivers build config
00:02:14.458 dma/skeleton: not in enabled drivers build config
00:02:14.458 net/af_packet: not in enabled drivers build config
00:02:14.458 net/af_xdp: not in enabled drivers build config
00:02:14.458 net/ark: not in enabled drivers build config
00:02:14.458 net/atlantic: not in enabled drivers build config
00:02:14.458 net/avp: not in enabled drivers build config
00:02:14.458 net/axgbe: not in enabled drivers build config
00:02:14.458 net/bnx2x: not in enabled drivers build config
00:02:14.458 net/bnxt: not in enabled drivers build config
00:02:14.458 net/bonding: not in enabled drivers build config
00:02:14.458 net/cnxk: not in enabled drivers build config
00:02:14.458 net/cpfl: not in enabled drivers build config
00:02:14.458 net/cxgbe: not in enabled drivers build config
00:02:14.458 net/dpaa: not in enabled drivers build config
00:02:14.458 net/dpaa2: not in enabled drivers build config
00:02:14.458 net/e1000: not in enabled drivers build config
00:02:14.458 net/ena: not in enabled drivers build config
00:02:14.458 net/enetc: not in enabled drivers build config
00:02:14.458 net/enetfec: not in enabled drivers build config
00:02:14.458 net/enic: not in enabled drivers build config
00:02:14.458 net/failsafe: not in enabled drivers build config
00:02:14.458 net/fm10k: not in enabled drivers build config
00:02:14.458 net/gve: not in enabled drivers build config
00:02:14.458 net/hinic: not in enabled drivers build config
00:02:14.458 net/hns3: not in enabled drivers build config
00:02:14.459 net/i40e: not in enabled drivers build config
00:02:14.459 net/iavf: not in enabled drivers build config
00:02:14.459 net/ice: not in enabled drivers build config
00:02:14.459 net/idpf: not in enabled drivers build config
00:02:14.459 net/igc: not in enabled drivers build config
00:02:14.459 net/ionic: not in enabled drivers build config
00:02:14.459 net/ipn3ke: not in enabled drivers build config
00:02:14.459 net/ixgbe: not in enabled drivers build config
00:02:14.459 net/mana: not in enabled drivers build config
00:02:14.459 net/memif: not in enabled drivers build config
00:02:14.459 net/mlx4: not in enabled drivers build config
00:02:14.459 net/mlx5: not in enabled drivers build config
00:02:14.459 net/mvneta: not in enabled drivers build config
00:02:14.459 net/mvpp2: not in enabled drivers build config
00:02:14.459 net/netvsc: not in enabled drivers build config
00:02:14.459 net/nfb: not in enabled drivers build config
00:02:14.459 net/nfp: not in enabled drivers build config
00:02:14.459 net/ngbe: not in enabled drivers build config
00:02:14.459 net/null: not in enabled drivers build config
00:02:14.459 net/octeontx: not in enabled drivers build config
00:02:14.459 net/octeon_ep: not in enabled drivers build config
00:02:14.459 net/pcap: not in enabled drivers build config
00:02:14.459 net/pfe: not in enabled drivers build config
00:02:14.459 net/qede: not in enabled drivers build config
00:02:14.459 net/ring: not in enabled drivers build config
00:02:14.459 net/sfc: not in enabled drivers build config
00:02:14.459 net/softnic: not in enabled drivers build config
00:02:14.459 net/tap: not in enabled drivers build config
00:02:14.459 net/thunderx: not in enabled drivers build config
00:02:14.459 net/txgbe: not in enabled drivers build config
00:02:14.459 net/vdev_netvsc: not in enabled drivers build config
00:02:14.459 net/vhost: not in enabled drivers build config
00:02:14.459 net/virtio: not in enabled drivers build config
00:02:14.459 net/vmxnet3: not in enabled drivers build config
00:02:14.459 raw/*: missing internal dependency, "rawdev"
00:02:14.459 crypto/armv8: not in enabled drivers build config
00:02:14.459 crypto/bcmfs: not in enabled drivers build config
00:02:14.459 crypto/caam_jr: not in enabled drivers build config
00:02:14.459 crypto/ccp: not in enabled drivers build config
00:02:14.459 crypto/cnxk: not in enabled drivers build config
00:02:14.459 crypto/dpaa_sec: not in enabled drivers build config
00:02:14.459 crypto/dpaa2_sec: not in enabled drivers build config
00:02:14.459 crypto/ipsec_mb: not in enabled drivers build config
00:02:14.459 crypto/mlx5: not in enabled drivers build config
00:02:14.459 crypto/mvsam: not in enabled drivers build config
00:02:14.459 crypto/nitrox: not in enabled drivers build config
00:02:14.459 crypto/null: not in enabled drivers build config
00:02:14.459 crypto/octeontx: not in enabled drivers build config
00:02:14.459 crypto/openssl: not in enabled drivers build config
00:02:14.459 crypto/scheduler: not in enabled drivers build config
00:02:14.459 crypto/uadk: not in enabled drivers build config
00:02:14.459 crypto/virtio: not in enabled drivers build config
00:02:14.459 compress/isal: not in enabled drivers build config
00:02:14.459 compress/mlx5: not in enabled drivers build config
00:02:14.459 compress/nitrox: not in enabled drivers build config
00:02:14.459 compress/octeontx: not in enabled drivers build config
00:02:14.459 compress/zlib: not in enabled drivers build config
00:02:14.459 regex/*: missing internal dependency, "regexdev"
00:02:14.459 ml/*: missing internal dependency, "mldev"
00:02:14.459 vdpa/ifc: not in enabled drivers build config
00:02:14.459 vdpa/mlx5: not in enabled drivers build config
00:02:14.459 vdpa/nfp: not in enabled drivers build config
00:02:14.459 vdpa/sfc: not in enabled drivers build config
00:02:14.459 event/*: missing internal dependency, "eventdev"
00:02:14.459 baseband/*: missing internal dependency, "bbdev"
00:02:14.459 gpu/*: missing internal dependency, "gpudev"
00:02:14.459
00:02:14.459
00:02:14.459 Build targets in project: 84
00:02:14.459
00:02:14.459 DPDK 24.03.0
00:02:14.459
00:02:14.459 User defined options
00:02:14.459 buildtype : debug
00:02:14.459 default_library : shared
00:02:14.459 libdir : lib
00:02:14.459 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:14.459 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:14.459 c_link_args :
00:02:14.459 cpu_instruction_set: native
00:02:14.459 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:02:14.459 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:02:14.459 enable_docs : false
00:02:14.459 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:14.459 enable_kmods : false
00:02:14.459 max_lcores : 128
00:02:14.459 tests : false
00:02:14.459
00:02:14.459 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:14.725 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:14.725 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:14.725 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:14.725 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:14.725 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:14.725 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:14.725 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:14.725 [7/267] Linking static target lib/librte_kvargs.a
00:02:14.725 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:14.725 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:14.725 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:14.725 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:14.725 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:14.725 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:14.725 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:14.725 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:14.725 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:14.725 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:14.725 [18/267] Linking static target lib/librte_log.a
00:02:14.725 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:14.725 [20/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:14.725 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:14.725 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:14.725 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:14.725 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:14.725 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:14.988 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:14.988 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:14.988 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:14.988 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:14.988 [30/267] Linking static target lib/librte_pci.a
00:02:14.988 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:14.988 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:14.988 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:14.988 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:14.988 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:14.988 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:14.988 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:14.988 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:15.248 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:15.248 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:15.248 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:15.248 [42/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:15.248 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:15.248 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:15.248 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.248 [46/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.248 [47/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:15.248 [48/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:15.248 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:15.248 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:15.248 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:15.248 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:15.248 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:15.248 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:15.248 [55/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:15.248 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:15.248 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:15.248 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:15.248 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:15.248 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:15.248 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:15.248 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:15.248 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:15.248 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:15.248 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:15.248 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:15.248 [67/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:15.248 [68/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:15.248 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:15.248 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.248 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.248 [72/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:15.248 [73/267] Linking static target lib/librte_ring.a 00:02:15.248 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.248 [75/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.248 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.248 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.248 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.248 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.248 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.248 [81/267] Linking static target lib/librte_meter.a 00:02:15.248 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.248 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.248 [84/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.248 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.248 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.248 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.248 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.248 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.248 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.248 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.248 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.248 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.248 [94/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.248 [95/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.248 [96/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.248 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.248 [98/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.248 [99/267] Linking static target lib/librte_telemetry.a 00:02:15.248 [100/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.248 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.248 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.248 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:15.248 [104/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:15.248 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.248 [106/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.248 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:15.248 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.248 [109/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.248 [110/267] Compiling C 
object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.248 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.248 [112/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.248 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.248 [114/267] Linking static target lib/librte_cmdline.a 00:02:15.248 [115/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.248 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.248 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.248 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.248 [119/267] Linking static target lib/librte_timer.a 00:02:15.248 [120/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.248 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.248 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.248 [123/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.248 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.248 [125/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.248 [126/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.248 [127/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.248 [128/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.248 [129/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.248 [130/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.249 [131/267] Linking static target lib/librte_rcu.a 00:02:15.249 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.249 [133/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.249 [134/267] Linking static target lib/librte_net.a 00:02:15.249 [135/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.249 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.249 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.249 [138/267] Linking static target lib/librte_mempool.a 00:02:15.509 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.509 [140/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.509 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.509 [142/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:15.509 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.509 [144/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.509 [145/267] Linking static target lib/librte_power.a 00:02:15.509 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.509 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.510 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:15.510 [149/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.510 [150/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.510 [151/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:15.510 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.510 [153/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.510 [154/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.510 [155/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.510 [156/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.510 [157/267] Linking static target lib/librte_reorder.a 00:02:15.510 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.510 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.510 [160/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.510 [161/267] Linking static target lib/librte_dmadev.a 00:02:15.510 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.510 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.510 [164/267] Linking target lib/librte_log.so.24.1 00:02:15.510 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.510 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.510 [167/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.510 [168/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.510 [169/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.510 [170/267] Linking static target lib/librte_compressdev.a 00:02:15.510 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.510 [172/267] Linking static target lib/librte_security.a 00:02:15.510 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.510 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.510 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.510 [176/267] Linking static target lib/librte_eal.a 00:02:15.510 [177/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.510 [178/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.510 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.510 [180/267] Linking static target lib/librte_mbuf.a 00:02:15.510 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.510 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.510 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:15.510 [184/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.510 [185/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.510 [186/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.510 [187/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.510 [188/267] Linking static target drivers/librte_bus_vdev.a 00:02:15.510 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.772 [190/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.772 [191/267] Linking target lib/librte_kvargs.so.24.1 00:02:15.772 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:15.772 [193/267] Compiling C 
object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.772 [194/267] Linking static target lib/librte_hash.a 00:02:15.772 [195/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.772 [196/267] Linking static target drivers/librte_mempool_ring.a 00:02:15.772 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.772 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.772 [199/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.772 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.772 [201/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.772 [202/267] Linking static target drivers/librte_bus_pci.a 00:02:15.772 [203/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.772 [204/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.772 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:15.772 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.772 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.772 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.772 [209/267] Linking static target lib/librte_cryptodev.a 00:02:16.034 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.034 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:16.034 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.034 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.034 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:16.034 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.296 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.296 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.296 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.296 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.296 [220/267] Linking static target lib/librte_ethdev.a 00:02:16.296 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.296 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.558 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.558 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.558 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.818 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.394 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.394 [228/267] Linking static target lib/librte_vhost.a 00:02:17.968 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.887 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.481 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.055 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.055 [233/267] Linking target lib/librte_eal.so.24.1 00:02:27.055 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:27.055 [235/267] Linking target lib/librte_timer.so.24.1 00:02:27.055 [236/267] Linking target lib/librte_ring.so.24.1 00:02:27.055 [237/267] Linking target lib/librte_pci.so.24.1 00:02:27.055 [238/267] Linking target lib/librte_dmadev.so.24.1 00:02:27.055 [239/267] Linking target lib/librte_meter.so.24.1 00:02:27.055 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:27.316 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:27.316 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:27.316 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:27.316 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:27.316 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:27.316 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:27.316 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:27.316 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:27.578 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:27.578 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:27.578 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:27.578 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:27.578 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:27.578 [254/267] Linking target lib/librte_net.so.24.1 00:02:27.578 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:27.578 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:27.839 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:27.839 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:27.839 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:27.839 [260/267] Linking target lib/librte_hash.so.24.1 00:02:27.839 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:27.839 [262/267] Linking target lib/librte_security.so.24.1 00:02:27.839 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:28.100 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:28.100 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:28.100 [266/267] Linking target lib/librte_power.so.24.1 00:02:28.100 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:28.100 INFO: autodetecting backend as ninja 00:02:28.100 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:31.407 CC lib/log/log.o 00:02:31.407 CC lib/log/log_flags.o 00:02:31.407 CC lib/ut/ut.o 00:02:31.407 CC lib/log/log_deprecated.o 00:02:31.407 CC lib/ut_mock/mock.o 00:02:31.670 LIB 
libspdk_ut_mock.a 00:02:31.670 LIB libspdk_ut.a 00:02:31.670 LIB libspdk_log.a 00:02:31.670 SO libspdk_ut_mock.so.6.0 00:02:31.670 SO libspdk_ut.so.2.0 00:02:31.670 SO libspdk_log.so.7.1 00:02:31.670 SYMLINK libspdk_ut.so 00:02:31.670 SYMLINK libspdk_ut_mock.so 00:02:31.670 SYMLINK libspdk_log.so 00:02:31.932 CC lib/dma/dma.o 00:02:31.932 CXX lib/trace_parser/trace.o 00:02:31.932 CC lib/util/base64.o 00:02:31.932 CC lib/util/bit_array.o 00:02:31.932 CC lib/ioat/ioat.o 00:02:31.932 CC lib/util/crc32.o 00:02:31.932 CC lib/util/cpuset.o 00:02:31.932 CC lib/util/crc16.o 00:02:31.932 CC lib/util/crc32c.o 00:02:31.932 CC lib/util/crc32_ieee.o 00:02:31.932 CC lib/util/crc64.o 00:02:31.932 CC lib/util/fd_group.o 00:02:31.932 CC lib/util/dif.o 00:02:31.932 CC lib/util/fd.o 00:02:31.932 CC lib/util/file.o 00:02:31.932 CC lib/util/hexlify.o 00:02:31.932 CC lib/util/iov.o 00:02:31.932 CC lib/util/math.o 00:02:31.932 CC lib/util/net.o 00:02:31.932 CC lib/util/pipe.o 00:02:31.932 CC lib/util/strerror_tls.o 00:02:31.932 CC lib/util/string.o 00:02:31.932 CC lib/util/uuid.o 00:02:31.932 CC lib/util/xor.o 00:02:31.932 CC lib/util/zipf.o 00:02:31.932 CC lib/util/md5.o 00:02:32.194 CC lib/vfio_user/host/vfio_user_pci.o 00:02:32.194 CC lib/vfio_user/host/vfio_user.o 00:02:32.194 LIB libspdk_dma.a 00:02:32.194 SO libspdk_dma.so.5.0 00:02:32.194 LIB libspdk_ioat.a 00:02:32.457 SO libspdk_ioat.so.7.0 00:02:32.457 SYMLINK libspdk_dma.so 00:02:32.457 SYMLINK libspdk_ioat.so 00:02:32.457 LIB libspdk_vfio_user.a 00:02:32.457 SO libspdk_vfio_user.so.5.0 00:02:32.457 SYMLINK libspdk_vfio_user.so 00:02:32.457 LIB libspdk_util.a 00:02:32.718 SO libspdk_util.so.10.1 00:02:32.718 SYMLINK libspdk_util.so 00:02:32.718 LIB libspdk_trace_parser.a 00:02:32.980 SO libspdk_trace_parser.so.6.0 00:02:32.980 SYMLINK libspdk_trace_parser.so 00:02:33.242 CC lib/rdma_utils/rdma_utils.o 00:02:33.242 CC lib/idxd/idxd.o 00:02:33.242 CC lib/idxd/idxd_user.o 00:02:33.242 CC lib/conf/conf.o 00:02:33.242 CC lib/idxd/idxd_kernel.o 00:02:33.242 CC lib/json/json_parse.o 00:02:33.242 CC lib/json/json_util.o 00:02:33.242 CC lib/json/json_write.o 00:02:33.242 CC lib/env_dpdk/env.o 00:02:33.242 CC lib/vmd/vmd.o 00:02:33.242 CC lib/env_dpdk/memory.o 00:02:33.242 CC lib/env_dpdk/pci.o 00:02:33.242 CC lib/vmd/led.o 00:02:33.242 CC lib/env_dpdk/pci_ioat.o 00:02:33.242 CC lib/env_dpdk/init.o 00:02:33.242 CC lib/env_dpdk/threads.o 00:02:33.242 CC lib/env_dpdk/pci_idxd.o 00:02:33.242 CC lib/env_dpdk/pci_virtio.o 00:02:33.242 CC lib/env_dpdk/pci_vmd.o 00:02:33.242 CC lib/env_dpdk/sigbus_handler.o 00:02:33.242 CC lib/env_dpdk/pci_event.o 00:02:33.242 CC lib/env_dpdk/pci_dpdk.o 00:02:33.242 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:33.242 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:33.503 LIB libspdk_conf.a 00:02:33.503 LIB libspdk_rdma_utils.a 00:02:33.503 SO libspdk_conf.so.6.0 00:02:33.503 SO libspdk_rdma_utils.so.1.0 00:02:33.503 LIB libspdk_json.a 00:02:33.503 SYMLINK libspdk_conf.so 00:02:33.503 SO libspdk_json.so.6.0 00:02:33.503 SYMLINK libspdk_rdma_utils.so 00:02:33.503 SYMLINK libspdk_json.so 00:02:33.765 LIB libspdk_idxd.a 00:02:33.765 SO libspdk_idxd.so.12.1 00:02:33.765 LIB libspdk_vmd.a 00:02:33.765 SO libspdk_vmd.so.6.0 00:02:33.765 SYMLINK libspdk_idxd.so 00:02:33.765 CC lib/rdma_provider/common.o 00:02:33.765 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:33.765 SYMLINK libspdk_vmd.so 00:02:34.025 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.025 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.025 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.025 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.025 LIB libspdk_rdma_provider.a 00:02:34.025 SO libspdk_rdma_provider.so.7.0 00:02:34.286 SYMLINK libspdk_rdma_provider.so 00:02:34.286 LIB libspdk_jsonrpc.a 00:02:34.286 SO libspdk_jsonrpc.so.6.0 00:02:34.286 SYMLINK libspdk_jsonrpc.so 00:02:34.286 LIB libspdk_env_dpdk.a 00:02:34.548 SO libspdk_env_dpdk.so.15.1 00:02:34.548 SYMLINK libspdk_env_dpdk.so 00:02:34.548 CC lib/rpc/rpc.o 00:02:34.809 LIB libspdk_rpc.a 00:02:34.809 SO libspdk_rpc.so.6.0 00:02:35.071 SYMLINK libspdk_rpc.so 00:02:35.333 CC lib/keyring/keyring.o 00:02:35.333 CC lib/notify/notify.o 00:02:35.333 CC lib/notify/notify_rpc.o 00:02:35.333 CC lib/keyring/keyring_rpc.o 00:02:35.333 CC lib/trace/trace.o 00:02:35.333 CC lib/trace/trace_flags.o 00:02:35.333 CC lib/trace/trace_rpc.o 00:02:35.594 LIB libspdk_notify.a 00:02:35.594 SO libspdk_notify.so.6.0 00:02:35.594 LIB libspdk_keyring.a 00:02:35.594 LIB libspdk_trace.a 00:02:35.594 SO libspdk_keyring.so.2.0 00:02:35.594 SYMLINK libspdk_notify.so 00:02:35.594 SO libspdk_trace.so.11.0 00:02:35.594 SYMLINK libspdk_keyring.so 00:02:35.595 SYMLINK libspdk_trace.so 00:02:36.168 CC lib/sock/sock.o 00:02:36.168 CC lib/thread/thread.o 00:02:36.168 CC lib/thread/iobuf.o 00:02:36.168 CC lib/sock/sock_rpc.o 00:02:36.429 LIB libspdk_sock.a 00:02:36.429 SO libspdk_sock.so.10.0 00:02:36.429 SYMLINK libspdk_sock.so 00:02:37.001 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:37.001 CC lib/nvme/nvme_ctrlr.o 00:02:37.001 CC lib/nvme/nvme_ns_cmd.o 00:02:37.001 CC lib/nvme/nvme_fabric.o 00:02:37.001 CC lib/nvme/nvme_ns.o 00:02:37.001 CC lib/nvme/nvme_pcie_common.o 00:02:37.001 CC lib/nvme/nvme_pcie.o 00:02:37.001 CC lib/nvme/nvme_qpair.o 00:02:37.001 CC lib/nvme/nvme.o 00:02:37.001 CC lib/nvme/nvme_quirks.o 00:02:37.001 CC lib/nvme/nvme_transport.o 00:02:37.001 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:37.001 CC lib/nvme/nvme_discovery.o 00:02:37.002 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:37.002 CC lib/nvme/nvme_tcp.o 00:02:37.002 CC lib/nvme/nvme_opal.o 00:02:37.002 CC lib/nvme/nvme_io_msg.o 00:02:37.002 CC lib/nvme/nvme_poll_group.o 00:02:37.002 CC lib/nvme/nvme_zns.o 00:02:37.002 CC lib/nvme/nvme_stubs.o 00:02:37.002 CC lib/nvme/nvme_auth.o 00:02:37.002 CC lib/nvme/nvme_cuse.o 00:02:37.002 CC lib/nvme/nvme_vfio_user.o 00:02:37.002 CC lib/nvme/nvme_rdma.o 00:02:37.573 LIB libspdk_thread.a 00:02:37.573 SO libspdk_thread.so.11.0 00:02:37.573 SYMLINK libspdk_thread.so 00:02:37.834 CC lib/accel/accel.o 00:02:37.834 CC lib/accel/accel_rpc.o 00:02:37.834 CC lib/accel/accel_sw.o 00:02:37.834 CC lib/blob/blobstore.o 00:02:37.834 CC lib/blob/request.o 00:02:37.834 CC lib/fsdev/fsdev_io.o 00:02:37.834 CC lib/blob/zeroes.o 00:02:37.834 CC lib/fsdev/fsdev.o 00:02:37.834 CC lib/blob/blob_bs_dev.o 00:02:37.834 CC lib/init/json_config.o 00:02:37.834 CC lib/init/subsystem.o 00:02:37.834 CC lib/fsdev/fsdev_rpc.o 00:02:37.834 CC lib/init/subsystem_rpc.o 00:02:37.834 CC lib/virtio/virtio.o 00:02:37.834 CC lib/init/rpc.o 00:02:37.834 CC lib/virtio/virtio_vhost_user.o 00:02:37.834 CC lib/virtio/virtio_vfio_user.o 00:02:37.834 CC lib/virtio/virtio_pci.o 00:02:37.834 CC lib/vfu_tgt/tgt_endpoint.o 00:02:37.834 CC lib/vfu_tgt/tgt_rpc.o 00:02:38.095 LIB libspdk_init.a 00:02:38.095 SO libspdk_init.so.6.0 00:02:38.095 LIB libspdk_virtio.a 00:02:38.095 SO libspdk_virtio.so.7.0 00:02:38.095 LIB libspdk_vfu_tgt.a 00:02:38.357 SYMLINK libspdk_init.so 00:02:38.357 SO libspdk_vfu_tgt.so.3.0 00:02:38.357 SYMLINK libspdk_virtio.so 00:02:38.357 SYMLINK libspdk_vfu_tgt.so 00:02:38.357 LIB libspdk_fsdev.a 
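
Each SPDK component in this stretch is logged three ways as it is produced: LIB (the static archive, e.g. libspdk_log.a), SO (the versioned shared object, e.g. libspdk_log.so.7.1), and SYMLINK (the unversioned development symlink pointing at it). A rough equivalent of that last step, with the build/lib output directory assumed:

    # illustrative: refresh the unversioned dev symlinks for versioned .so files
    for so in build/lib/libspdk_*.so.*; do
        link="${so%%.so.*}.so"               # libspdk_log.so.7.1 -> libspdk_log.so
        ln -sfn "$(basename "$so")" "$link"  # relative target, same directory
    done
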
00:02:38.357 SO libspdk_fsdev.so.2.0 00:02:38.618 SYMLINK libspdk_fsdev.so 00:02:38.618 CC lib/event/app.o 00:02:38.618 CC lib/event/reactor.o 00:02:38.618 CC lib/event/app_rpc.o 00:02:38.618 CC lib/event/log_rpc.o 00:02:38.618 CC lib/event/scheduler_static.o 00:02:38.880 LIB libspdk_accel.a 00:02:38.880 SO libspdk_accel.so.16.0 00:02:38.880 LIB libspdk_nvme.a 00:02:38.880 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:38.880 SYMLINK libspdk_accel.so 00:02:38.880 LIB libspdk_event.a 00:02:38.880 SO libspdk_nvme.so.15.0 00:02:39.142 SO libspdk_event.so.14.0 00:02:39.142 SYMLINK libspdk_event.so 00:02:39.142 SYMLINK libspdk_nvme.so 00:02:39.404 CC lib/bdev/bdev.o 00:02:39.404 CC lib/bdev/bdev_rpc.o 00:02:39.404 CC lib/bdev/bdev_zone.o 00:02:39.404 CC lib/bdev/part.o 00:02:39.404 CC lib/bdev/scsi_nvme.o 00:02:39.404 LIB libspdk_fuse_dispatcher.a 00:02:39.666 SO libspdk_fuse_dispatcher.so.1.0 00:02:39.666 SYMLINK libspdk_fuse_dispatcher.so 00:02:40.611 LIB libspdk_blob.a 00:02:40.611 SO libspdk_blob.so.12.0 00:02:40.611 SYMLINK libspdk_blob.so 00:02:41.184 CC lib/blobfs/blobfs.o 00:02:41.184 CC lib/blobfs/tree.o 00:02:41.184 CC lib/lvol/lvol.o 00:02:41.758 LIB libspdk_bdev.a 00:02:41.758 SO libspdk_bdev.so.17.0 00:02:41.758 LIB libspdk_blobfs.a 00:02:41.758 SYMLINK libspdk_bdev.so 00:02:41.758 SO libspdk_blobfs.so.11.0 00:02:41.758 LIB libspdk_lvol.a 00:02:42.019 SYMLINK libspdk_blobfs.so 00:02:42.019 SO libspdk_lvol.so.11.0 00:02:42.019 SYMLINK libspdk_lvol.so 00:02:42.281 CC lib/ftl/ftl_core.o 00:02:42.281 CC lib/ftl/ftl_init.o 00:02:42.281 CC lib/scsi/dev.o 00:02:42.281 CC lib/ftl/ftl_layout.o 00:02:42.281 CC lib/scsi/lun.o 00:02:42.281 CC lib/ftl/ftl_debug.o 00:02:42.281 CC lib/ublk/ublk.o 00:02:42.281 CC lib/scsi/port.o 00:02:42.281 CC lib/ftl/ftl_io.o 00:02:42.281 CC lib/ublk/ublk_rpc.o 00:02:42.281 CC lib/scsi/scsi.o 00:02:42.281 CC lib/ftl/ftl_sb.o 00:02:42.281 CC lib/scsi/scsi_bdev.o 00:02:42.281 CC lib/ftl/ftl_l2p.o 00:02:42.281 CC lib/scsi/scsi_pr.o 00:02:42.281 CC lib/nvmf/ctrlr.o 00:02:42.281 CC lib/ftl/ftl_l2p_flat.o 00:02:42.281 CC lib/nvmf/ctrlr_discovery.o 00:02:42.281 CC lib/scsi/scsi_rpc.o 00:02:42.281 CC lib/ftl/ftl_nv_cache.o 00:02:42.281 CC lib/scsi/task.o 00:02:42.281 CC lib/nvmf/ctrlr_bdev.o 00:02:42.281 CC lib/ftl/ftl_band.o 00:02:42.281 CC lib/nvmf/subsystem.o 00:02:42.281 CC lib/ftl/ftl_band_ops.o 00:02:42.281 CC lib/ftl/ftl_writer.o 00:02:42.281 CC lib/nvmf/nvmf.o 00:02:42.281 CC lib/nvmf/nvmf_rpc.o 00:02:42.281 CC lib/ftl/ftl_rq.o 00:02:42.281 CC lib/nbd/nbd.o 00:02:42.281 CC lib/ftl/ftl_reloc.o 00:02:42.281 CC lib/nvmf/transport.o 00:02:42.281 CC lib/nbd/nbd_rpc.o 00:02:42.281 CC lib/ftl/ftl_l2p_cache.o 00:02:42.281 CC lib/nvmf/tcp.o 00:02:42.281 CC lib/nvmf/stubs.o 00:02:42.281 CC lib/ftl/ftl_p2l.o 00:02:42.281 CC lib/nvmf/mdns_server.o 00:02:42.281 CC lib/ftl/ftl_p2l_log.o 00:02:42.281 CC lib/nvmf/vfio_user.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt.o 00:02:42.281 CC lib/nvmf/rdma.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:42.281 CC lib/nvmf/auth.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:42.281 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
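
The library order in this stretch mirrors SPDK's storage-stack layering: the bdev core (bdev.o, bdev_rpc.o, part.o, ...) is archived first, then blob, blobfs and lvol, which sit on top of it, with the long lib/ftl, lib/scsi and lib/nvmf listing continuing below. One way to confirm which objects actually landed in a given archive once this phase finishes (output path assumed):

    # illustrative: list the members of the bdev core archive
    ar t build/lib/libspdk_bdev.a   # expect bdev.o, bdev_rpc.o, bdev_zone.o, part.o, scsi_nvme.o
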
00:02:42.281 CC lib/ftl/utils/ftl_conf.o 00:02:42.281 CC lib/ftl/utils/ftl_md.o 00:02:42.281 CC lib/ftl/utils/ftl_mempool.o 00:02:42.281 CC lib/ftl/utils/ftl_bitmap.o 00:02:42.281 CC lib/ftl/utils/ftl_property.o 00:02:42.281 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:42.281 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:42.281 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:42.281 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:42.281 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:42.281 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:42.281 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:42.281 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:42.281 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:42.281 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:42.281 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:42.281 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:42.281 CC lib/ftl/base/ftl_base_dev.o 00:02:42.281 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:42.281 CC lib/ftl/ftl_trace.o 00:02:42.281 CC lib/ftl/base/ftl_base_bdev.o 00:02:42.852 LIB libspdk_nbd.a 00:02:42.852 SO libspdk_nbd.so.7.0 00:02:42.852 LIB libspdk_scsi.a 00:02:42.852 SYMLINK libspdk_nbd.so 00:02:42.852 SO libspdk_scsi.so.9.0 00:02:42.852 LIB libspdk_ublk.a 00:02:42.852 SYMLINK libspdk_scsi.so 00:02:42.852 SO libspdk_ublk.so.3.0 00:02:43.114 SYMLINK libspdk_ublk.so 00:02:43.114 LIB libspdk_ftl.a 00:02:43.377 CC lib/vhost/vhost.o 00:02:43.377 CC lib/vhost/vhost_rpc.o 00:02:43.377 CC lib/iscsi/conn.o 00:02:43.377 CC lib/vhost/vhost_scsi.o 00:02:43.377 CC lib/vhost/vhost_blk.o 00:02:43.377 CC lib/iscsi/init_grp.o 00:02:43.377 CC lib/iscsi/portal_grp.o 00:02:43.377 CC lib/iscsi/iscsi.o 00:02:43.377 CC lib/vhost/rte_vhost_user.o 00:02:43.377 CC lib/iscsi/tgt_node.o 00:02:43.377 CC lib/iscsi/param.o 00:02:43.377 CC lib/iscsi/iscsi_subsystem.o 00:02:43.377 CC lib/iscsi/iscsi_rpc.o 00:02:43.377 CC lib/iscsi/task.o 00:02:43.377 SO libspdk_ftl.so.9.0 00:02:43.640 SYMLINK libspdk_ftl.so 00:02:44.215 LIB libspdk_nvmf.a 00:02:44.215 SO libspdk_nvmf.so.20.0 00:02:44.215 LIB libspdk_vhost.a 00:02:44.215 SO libspdk_vhost.so.8.0 00:02:44.476 SYMLINK libspdk_nvmf.so 00:02:44.476 SYMLINK libspdk_vhost.so 00:02:44.476 LIB libspdk_iscsi.a 00:02:44.476 SO libspdk_iscsi.so.8.0 00:02:44.739 SYMLINK libspdk_iscsi.so 00:02:45.312 CC module/vfu_device/vfu_virtio.o 00:02:45.312 CC module/vfu_device/vfu_virtio_blk.o 00:02:45.312 CC module/vfu_device/vfu_virtio_scsi.o 00:02:45.312 CC module/vfu_device/vfu_virtio_rpc.o 00:02:45.312 CC module/env_dpdk/env_dpdk_rpc.o 00:02:45.313 CC module/vfu_device/vfu_virtio_fs.o 00:02:45.313 CC module/accel/ioat/accel_ioat.o 00:02:45.313 CC module/accel/ioat/accel_ioat_rpc.o 00:02:45.313 CC module/blob/bdev/blob_bdev.o 00:02:45.313 CC module/sock/posix/posix.o 00:02:45.313 CC module/accel/error/accel_error.o 00:02:45.313 CC module/accel/error/accel_error_rpc.o 00:02:45.313 LIB libspdk_env_dpdk_rpc.a 00:02:45.313 CC module/keyring/linux/keyring.o 00:02:45.313 CC module/accel/dsa/accel_dsa.o 00:02:45.313 CC module/accel/dsa/accel_dsa_rpc.o 00:02:45.313 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:45.313 CC module/keyring/linux/keyring_rpc.o 00:02:45.313 CC module/accel/iaa/accel_iaa.o 00:02:45.313 CC module/fsdev/aio/fsdev_aio.o 00:02:45.313 CC module/accel/iaa/accel_iaa_rpc.o 00:02:45.313 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:45.313 CC module/fsdev/aio/linux_aio_mgr.o 00:02:45.313 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:45.313 CC module/keyring/file/keyring.o 00:02:45.313 CC module/keyring/file/keyring_rpc.o 00:02:45.574 CC 
module/scheduler/gscheduler/gscheduler.o 00:02:45.574 SO libspdk_env_dpdk_rpc.so.6.0 00:02:45.574 SYMLINK libspdk_env_dpdk_rpc.so 00:02:45.574 LIB libspdk_accel_ioat.a 00:02:45.574 LIB libspdk_keyring_linux.a 00:02:45.574 LIB libspdk_keyring_file.a 00:02:45.574 SO libspdk_accel_ioat.so.6.0 00:02:45.574 LIB libspdk_scheduler_gscheduler.a 00:02:45.574 LIB libspdk_scheduler_dpdk_governor.a 00:02:45.574 SO libspdk_keyring_linux.so.1.0 00:02:45.574 SO libspdk_keyring_file.so.2.0 00:02:45.574 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:45.574 SO libspdk_scheduler_gscheduler.so.4.0 00:02:45.574 LIB libspdk_scheduler_dynamic.a 00:02:45.574 LIB libspdk_accel_error.a 00:02:45.574 LIB libspdk_accel_iaa.a 00:02:45.574 SYMLINK libspdk_accel_ioat.so 00:02:45.574 SYMLINK libspdk_keyring_linux.so 00:02:45.574 SO libspdk_scheduler_dynamic.so.4.0 00:02:45.574 SO libspdk_accel_error.so.2.0 00:02:45.835 SYMLINK libspdk_keyring_file.so 00:02:45.835 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:45.835 SO libspdk_accel_iaa.so.3.0 00:02:45.835 LIB libspdk_blob_bdev.a 00:02:45.835 LIB libspdk_accel_dsa.a 00:02:45.835 SYMLINK libspdk_scheduler_gscheduler.so 00:02:45.835 SYMLINK libspdk_accel_error.so 00:02:45.835 SO libspdk_blob_bdev.so.12.0 00:02:45.835 SYMLINK libspdk_scheduler_dynamic.so 00:02:45.835 SO libspdk_accel_dsa.so.5.0 00:02:45.835 SYMLINK libspdk_accel_iaa.so 00:02:45.835 LIB libspdk_vfu_device.a 00:02:45.835 SYMLINK libspdk_blob_bdev.so 00:02:45.835 SYMLINK libspdk_accel_dsa.so 00:02:45.835 SO libspdk_vfu_device.so.3.0 00:02:46.097 SYMLINK libspdk_vfu_device.so 00:02:46.097 LIB libspdk_fsdev_aio.a 00:02:46.097 LIB libspdk_sock_posix.a 00:02:46.097 SO libspdk_fsdev_aio.so.1.0 00:02:46.097 SO libspdk_sock_posix.so.6.0 00:02:46.097 SYMLINK libspdk_fsdev_aio.so 00:02:46.097 SYMLINK libspdk_sock_posix.so 00:02:46.359 CC module/bdev/lvol/vbdev_lvol.o 00:02:46.359 CC module/bdev/delay/vbdev_delay.o 00:02:46.359 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:46.359 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:46.359 CC module/bdev/error/vbdev_error.o 00:02:46.359 CC module/bdev/error/vbdev_error_rpc.o 00:02:46.359 CC module/bdev/passthru/vbdev_passthru.o 00:02:46.359 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:46.359 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.359 CC module/bdev/split/vbdev_split.o 00:02:46.359 CC module/bdev/nvme/bdev_nvme.o 00:02:46.359 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:46.359 CC module/bdev/null/bdev_null.o 00:02:46.359 CC module/bdev/null/bdev_null_rpc.o 00:02:46.359 CC module/bdev/malloc/bdev_malloc.o 00:02:46.359 CC module/bdev/gpt/gpt.o 00:02:46.359 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:46.359 CC module/bdev/gpt/vbdev_gpt.o 00:02:46.359 CC module/bdev/split/vbdev_split_rpc.o 00:02:46.359 CC module/bdev/nvme/nvme_rpc.o 00:02:46.359 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:46.359 CC module/bdev/ftl/bdev_ftl.o 00:02:46.359 CC module/bdev/iscsi/bdev_iscsi.o 00:02:46.359 CC module/bdev/nvme/bdev_mdns_client.o 00:02:46.359 CC module/bdev/nvme/vbdev_opal.o 00:02:46.359 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:46.359 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.359 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:46.359 CC module/bdev/raid/bdev_raid.o 00:02:46.359 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:46.359 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:46.359 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.359 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.359 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:46.359 CC module/bdev/raid/raid0.o 
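
The module/bdev/* objects in this run of the listing are SPDK's virtual-bdev plugins (malloc, null, delay, split, passthru, gpt, raid, and so on); they carry no compile-time configuration and are exercised at runtime over JSON-RPC. Roughly how the raid module being compiled here gets driven once a target application is up: the RPC names are real SPDK RPCs, the sizes are arbitrary examples.

    # illustrative: stack two malloc bdevs under a RAID0 via the RPC client
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512 B blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py bdev_raid_create -n Raid0 -z 64 -r raid0 -b "Malloc0 Malloc1"
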
00:02:46.359 CC module/bdev/raid/raid1.o 00:02:46.359 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:46.359 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:46.359 CC module/bdev/raid/concat.o 00:02:46.359 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:46.359 CC module/bdev/aio/bdev_aio.o 00:02:46.359 CC module/bdev/aio/bdev_aio_rpc.o 00:02:46.620 LIB libspdk_blobfs_bdev.a 00:02:46.620 SO libspdk_blobfs_bdev.so.6.0 00:02:46.620 LIB libspdk_bdev_split.a 00:02:46.620 LIB libspdk_bdev_error.a 00:02:46.620 LIB libspdk_bdev_ftl.a 00:02:46.620 LIB libspdk_bdev_null.a 00:02:46.620 SO libspdk_bdev_error.so.6.0 00:02:46.620 SO libspdk_bdev_split.so.6.0 00:02:46.620 SYMLINK libspdk_blobfs_bdev.so 00:02:46.620 SO libspdk_bdev_ftl.so.6.0 00:02:46.620 SO libspdk_bdev_null.so.6.0 00:02:46.620 LIB libspdk_bdev_passthru.a 00:02:46.620 LIB libspdk_bdev_gpt.a 00:02:46.882 SO libspdk_bdev_passthru.so.6.0 00:02:46.882 SYMLINK libspdk_bdev_split.so 00:02:46.882 SYMLINK libspdk_bdev_error.so 00:02:46.882 SO libspdk_bdev_gpt.so.6.0 00:02:46.882 LIB libspdk_bdev_delay.a 00:02:46.882 SYMLINK libspdk_bdev_null.so 00:02:46.882 LIB libspdk_bdev_aio.a 00:02:46.882 LIB libspdk_bdev_zone_block.a 00:02:46.882 SYMLINK libspdk_bdev_ftl.so 00:02:46.882 LIB libspdk_bdev_malloc.a 00:02:46.882 SYMLINK libspdk_bdev_passthru.so 00:02:46.882 LIB libspdk_bdev_iscsi.a 00:02:46.882 SO libspdk_bdev_delay.so.6.0 00:02:46.882 SO libspdk_bdev_aio.so.6.0 00:02:46.882 SO libspdk_bdev_zone_block.so.6.0 00:02:46.882 SO libspdk_bdev_malloc.so.6.0 00:02:46.882 SYMLINK libspdk_bdev_gpt.so 00:02:46.882 SO libspdk_bdev_iscsi.so.6.0 00:02:46.882 LIB libspdk_bdev_lvol.a 00:02:46.882 SYMLINK libspdk_bdev_delay.so 00:02:46.882 SYMLINK libspdk_bdev_aio.so 00:02:46.882 SYMLINK libspdk_bdev_iscsi.so 00:02:46.882 SYMLINK libspdk_bdev_zone_block.so 00:02:46.882 SYMLINK libspdk_bdev_malloc.so 00:02:46.882 SO libspdk_bdev_lvol.so.6.0 00:02:46.882 LIB libspdk_bdev_virtio.a 00:02:46.882 SO libspdk_bdev_virtio.so.6.0 00:02:46.882 SYMLINK libspdk_bdev_lvol.so 00:02:47.144 SYMLINK libspdk_bdev_virtio.so 00:02:47.407 LIB libspdk_bdev_raid.a 00:02:47.407 SO libspdk_bdev_raid.so.6.0 00:02:47.407 SYMLINK libspdk_bdev_raid.so 00:02:48.797 LIB libspdk_bdev_nvme.a 00:02:48.797 SO libspdk_bdev_nvme.so.7.1 00:02:48.797 SYMLINK libspdk_bdev_nvme.so 00:02:49.744 CC module/event/subsystems/scheduler/scheduler.o 00:02:49.744 CC module/event/subsystems/vmd/vmd.o 00:02:49.744 CC module/event/subsystems/keyring/keyring.o 00:02:49.744 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:49.744 CC module/event/subsystems/sock/sock.o 00:02:49.744 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:49.744 CC module/event/subsystems/iobuf/iobuf.o 00:02:49.744 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:49.744 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:49.744 CC module/event/subsystems/fsdev/fsdev.o 00:02:49.744 LIB libspdk_event_scheduler.a 00:02:49.744 LIB libspdk_event_keyring.a 00:02:49.744 SO libspdk_event_scheduler.so.4.0 00:02:49.744 LIB libspdk_event_vmd.a 00:02:49.744 LIB libspdk_event_vhost_blk.a 00:02:49.744 LIB libspdk_event_fsdev.a 00:02:49.744 LIB libspdk_event_vfu_tgt.a 00:02:49.744 LIB libspdk_event_sock.a 00:02:49.744 LIB libspdk_event_iobuf.a 00:02:49.744 SO libspdk_event_keyring.so.1.0 00:02:49.744 SO libspdk_event_sock.so.5.0 00:02:49.744 SO libspdk_event_vhost_blk.so.3.0 00:02:49.744 SO libspdk_event_vfu_tgt.so.3.0 00:02:49.744 SO libspdk_event_vmd.so.6.0 00:02:49.744 SO libspdk_event_fsdev.so.1.0 00:02:49.744 SO libspdk_event_iobuf.so.3.0 00:02:49.744 
SYMLINK libspdk_event_scheduler.so 00:02:49.744 SYMLINK libspdk_event_vfu_tgt.so 00:02:49.744 SYMLINK libspdk_event_keyring.so 00:02:49.744 SYMLINK libspdk_event_vhost_blk.so 00:02:49.744 SYMLINK libspdk_event_fsdev.so 00:02:49.744 SYMLINK libspdk_event_sock.so 00:02:49.744 SYMLINK libspdk_event_vmd.so 00:02:49.744 SYMLINK libspdk_event_iobuf.so 00:02:50.316 CC module/event/subsystems/accel/accel.o 00:02:50.316 LIB libspdk_event_accel.a 00:02:50.316 SO libspdk_event_accel.so.6.0 00:02:50.316 SYMLINK libspdk_event_accel.so 00:02:50.888 CC module/event/subsystems/bdev/bdev.o 00:02:50.888 LIB libspdk_event_bdev.a 00:02:50.888 SO libspdk_event_bdev.so.6.0 00:02:51.150 SYMLINK libspdk_event_bdev.so 00:02:51.411 CC module/event/subsystems/ublk/ublk.o 00:02:51.411 CC module/event/subsystems/nbd/nbd.o 00:02:51.411 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.411 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.411 CC module/event/subsystems/scsi/scsi.o 00:02:51.411 LIB libspdk_event_nbd.a 00:02:51.673 LIB libspdk_event_ublk.a 00:02:51.673 SO libspdk_event_nbd.so.6.0 00:02:51.673 LIB libspdk_event_scsi.a 00:02:51.673 SO libspdk_event_ublk.so.3.0 00:02:51.673 SO libspdk_event_scsi.so.6.0 00:02:51.673 SYMLINK libspdk_event_nbd.so 00:02:51.673 LIB libspdk_event_nvmf.a 00:02:51.673 SYMLINK libspdk_event_ublk.so 00:02:51.673 SO libspdk_event_nvmf.so.6.0 00:02:51.673 SYMLINK libspdk_event_scsi.so 00:02:51.673 SYMLINK libspdk_event_nvmf.so 00:02:51.935 CC module/event/subsystems/iscsi/iscsi.o 00:02:51.935 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:52.196 LIB libspdk_event_vhost_scsi.a 00:02:52.196 LIB libspdk_event_iscsi.a 00:02:52.196 SO libspdk_event_vhost_scsi.so.3.0 00:02:52.196 SO libspdk_event_iscsi.so.6.0 00:02:52.196 SYMLINK libspdk_event_vhost_scsi.so 00:02:52.456 SYMLINK libspdk_event_iscsi.so 00:02:52.456 SO libspdk.so.6.0 00:02:52.456 SYMLINK libspdk.so 00:02:53.032 CC app/trace_record/trace_record.o 00:02:53.032 CC test/rpc_client/rpc_client_test.o 00:02:53.032 TEST_HEADER include/spdk/accel.h 00:02:53.032 CXX app/trace/trace.o 00:02:53.032 TEST_HEADER include/spdk/accel_module.h 00:02:53.032 TEST_HEADER include/spdk/assert.h 00:02:53.032 TEST_HEADER include/spdk/base64.h 00:02:53.032 TEST_HEADER include/spdk/bdev.h 00:02:53.032 TEST_HEADER include/spdk/barrier.h 00:02:53.032 TEST_HEADER include/spdk/bdev_zone.h 00:02:53.032 CC app/spdk_nvme_perf/perf.o 00:02:53.032 TEST_HEADER include/spdk/bdev_module.h 00:02:53.032 CC app/spdk_lspci/spdk_lspci.o 00:02:53.032 TEST_HEADER include/spdk/bit_array.h 00:02:53.032 TEST_HEADER include/spdk/blob_bdev.h 00:02:53.032 TEST_HEADER include/spdk/bit_pool.h 00:02:53.032 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:53.032 TEST_HEADER include/spdk/blobfs.h 00:02:53.032 TEST_HEADER include/spdk/conf.h 00:02:53.032 TEST_HEADER include/spdk/blob.h 00:02:53.032 CC app/spdk_nvme_discover/discovery_aer.o 00:02:53.032 TEST_HEADER include/spdk/config.h 00:02:53.032 TEST_HEADER include/spdk/cpuset.h 00:02:53.032 TEST_HEADER include/spdk/crc16.h 00:02:53.032 TEST_HEADER include/spdk/crc32.h 00:02:53.032 TEST_HEADER include/spdk/dif.h 00:02:53.032 CC app/spdk_nvme_identify/identify.o 00:02:53.032 CC app/spdk_top/spdk_top.o 00:02:53.032 TEST_HEADER include/spdk/crc64.h 00:02:53.032 TEST_HEADER include/spdk/dma.h 00:02:53.032 TEST_HEADER include/spdk/env_dpdk.h 00:02:53.032 TEST_HEADER include/spdk/endian.h 00:02:53.032 TEST_HEADER include/spdk/env.h 00:02:53.032 TEST_HEADER include/spdk/event.h 00:02:53.032 TEST_HEADER include/spdk/fd_group.h 
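
The TEST_HEADER entries starting here (and the CXX test/cpp_headers objects that follow) are SPDK's public-header hygiene checks: every header under include/spdk must compile standalone, including under a C++ compiler, so each one has to pull in its own dependencies and keep its extern "C" guards intact. A minimal re-sketch of the C side of such a check, with paths assumed relative to the SPDK tree:

    # illustrative: compile each public header in isolation
    for h in include/spdk/*.h; do
        echo "#include <spdk/$(basename "$h")>" \
            | cc -Iinclude -x c -c -o /dev/null -
    done
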
00:02:53.032 TEST_HEADER include/spdk/file.h 00:02:53.032 TEST_HEADER include/spdk/fd.h 00:02:53.032 TEST_HEADER include/spdk/fsdev.h 00:02:53.032 TEST_HEADER include/spdk/fsdev_module.h 00:02:53.032 TEST_HEADER include/spdk/ftl.h 00:02:53.032 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:53.032 TEST_HEADER include/spdk/gpt_spec.h 00:02:53.032 TEST_HEADER include/spdk/hexlify.h 00:02:53.032 TEST_HEADER include/spdk/idxd.h 00:02:53.032 TEST_HEADER include/spdk/histogram_data.h 00:02:53.032 CC app/nvmf_tgt/nvmf_main.o 00:02:53.032 TEST_HEADER include/spdk/idxd_spec.h 00:02:53.032 TEST_HEADER include/spdk/init.h 00:02:53.032 TEST_HEADER include/spdk/ioat_spec.h 00:02:53.032 TEST_HEADER include/spdk/ioat.h 00:02:53.032 TEST_HEADER include/spdk/json.h 00:02:53.032 CC app/spdk_dd/spdk_dd.o 00:02:53.032 TEST_HEADER include/spdk/iscsi_spec.h 00:02:53.032 TEST_HEADER include/spdk/keyring.h 00:02:53.032 TEST_HEADER include/spdk/jsonrpc.h 00:02:53.032 TEST_HEADER include/spdk/keyring_module.h 00:02:53.032 TEST_HEADER include/spdk/likely.h 00:02:53.032 TEST_HEADER include/spdk/log.h 00:02:53.032 TEST_HEADER include/spdk/lvol.h 00:02:53.032 TEST_HEADER include/spdk/md5.h 00:02:53.032 TEST_HEADER include/spdk/memory.h 00:02:53.032 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:53.032 TEST_HEADER include/spdk/mmio.h 00:02:53.032 TEST_HEADER include/spdk/nbd.h 00:02:53.032 CC app/iscsi_tgt/iscsi_tgt.o 00:02:53.032 TEST_HEADER include/spdk/net.h 00:02:53.032 TEST_HEADER include/spdk/notify.h 00:02:53.032 TEST_HEADER include/spdk/nvme.h 00:02:53.032 TEST_HEADER include/spdk/nvme_intel.h 00:02:53.032 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:53.032 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:53.032 TEST_HEADER include/spdk/nvme_spec.h 00:02:53.032 TEST_HEADER include/spdk/nvme_zns.h 00:02:53.032 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:53.032 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:53.032 TEST_HEADER include/spdk/nvmf.h 00:02:53.032 TEST_HEADER include/spdk/nvmf_spec.h 00:02:53.032 TEST_HEADER include/spdk/nvmf_transport.h 00:02:53.032 TEST_HEADER include/spdk/opal.h 00:02:53.032 TEST_HEADER include/spdk/opal_spec.h 00:02:53.032 TEST_HEADER include/spdk/pci_ids.h 00:02:53.032 TEST_HEADER include/spdk/pipe.h 00:02:53.032 TEST_HEADER include/spdk/queue.h 00:02:53.032 CC app/spdk_tgt/spdk_tgt.o 00:02:53.032 TEST_HEADER include/spdk/reduce.h 00:02:53.032 TEST_HEADER include/spdk/rpc.h 00:02:53.032 TEST_HEADER include/spdk/scheduler.h 00:02:53.032 TEST_HEADER include/spdk/scsi.h 00:02:53.032 TEST_HEADER include/spdk/scsi_spec.h 00:02:53.032 TEST_HEADER include/spdk/sock.h 00:02:53.032 TEST_HEADER include/spdk/stdinc.h 00:02:53.032 TEST_HEADER include/spdk/string.h 00:02:53.032 TEST_HEADER include/spdk/thread.h 00:02:53.032 TEST_HEADER include/spdk/trace.h 00:02:53.032 TEST_HEADER include/spdk/trace_parser.h 00:02:53.032 TEST_HEADER include/spdk/tree.h 00:02:53.032 TEST_HEADER include/spdk/ublk.h 00:02:53.032 TEST_HEADER include/spdk/util.h 00:02:53.032 TEST_HEADER include/spdk/uuid.h 00:02:53.032 TEST_HEADER include/spdk/version.h 00:02:53.032 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:53.032 TEST_HEADER include/spdk/vhost.h 00:02:53.032 TEST_HEADER include/spdk/vmd.h 00:02:53.032 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:53.032 TEST_HEADER include/spdk/xor.h 00:02:53.032 TEST_HEADER include/spdk/zipf.h 00:02:53.032 CXX test/cpp_headers/accel.o 00:02:53.032 CXX test/cpp_headers/accel_module.o 00:02:53.032 CXX test/cpp_headers/assert.o 00:02:53.032 CXX test/cpp_headers/barrier.o 
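
Interleaved with the header checks, this stretch builds the binaries the test run depends on: nvmf_tgt (app/nvmf_tgt/nvmf_main.o), iscsi_tgt, spdk_tgt, spdk_dd and the interrupt_tgt example. For an nvmf-tcp job like this one, the freshly linked target is typically driven along these lines; the RPC names are real SPDK RPCs and 4420 is the standard NVMe-oF port, but the concrete values are illustrative.

    # illustrative: bring up the built TCP target and export one namespace
    build/bin/nvmf_tgt &
    sleep 2   # crude wait; the test scripts use a waitforlisten helper instead
    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
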
00:02:53.032 CXX test/cpp_headers/bdev.o 00:02:53.032 CXX test/cpp_headers/base64.o 00:02:53.032 CXX test/cpp_headers/bdev_module.o 00:02:53.032 CXX test/cpp_headers/bit_pool.o 00:02:53.032 CXX test/cpp_headers/bdev_zone.o 00:02:53.032 CXX test/cpp_headers/bit_array.o 00:02:53.032 CXX test/cpp_headers/blob_bdev.o 00:02:53.032 CXX test/cpp_headers/blobfs_bdev.o 00:02:53.032 CXX test/cpp_headers/blobfs.o 00:02:53.032 CXX test/cpp_headers/blob.o 00:02:53.032 CXX test/cpp_headers/config.o 00:02:53.032 CXX test/cpp_headers/conf.o 00:02:53.032 CXX test/cpp_headers/cpuset.o 00:02:53.032 CXX test/cpp_headers/crc16.o 00:02:53.032 CXX test/cpp_headers/crc32.o 00:02:53.032 CXX test/cpp_headers/crc64.o 00:02:53.032 CXX test/cpp_headers/dif.o 00:02:53.032 CXX test/cpp_headers/env_dpdk.o 00:02:53.033 CXX test/cpp_headers/dma.o 00:02:53.033 CXX test/cpp_headers/env.o 00:02:53.033 CXX test/cpp_headers/endian.o 00:02:53.033 CXX test/cpp_headers/event.o 00:02:53.033 CXX test/cpp_headers/fd_group.o 00:02:53.033 CXX test/cpp_headers/fsdev.o 00:02:53.033 CXX test/cpp_headers/fd.o 00:02:53.033 CXX test/cpp_headers/file.o 00:02:53.033 CXX test/cpp_headers/fsdev_module.o 00:02:53.033 CXX test/cpp_headers/fuse_dispatcher.o 00:02:53.033 CXX test/cpp_headers/ftl.o 00:02:53.033 CXX test/cpp_headers/gpt_spec.o 00:02:53.033 CXX test/cpp_headers/hexlify.o 00:02:53.033 CXX test/cpp_headers/histogram_data.o 00:02:53.033 CXX test/cpp_headers/idxd.o 00:02:53.033 CXX test/cpp_headers/idxd_spec.o 00:02:53.033 CXX test/cpp_headers/ioat.o 00:02:53.033 CXX test/cpp_headers/init.o 00:02:53.033 CXX test/cpp_headers/iscsi_spec.o 00:02:53.033 CXX test/cpp_headers/ioat_spec.o 00:02:53.033 CXX test/cpp_headers/jsonrpc.o 00:02:53.033 CXX test/cpp_headers/keyring.o 00:02:53.033 CXX test/cpp_headers/json.o 00:02:53.033 CXX test/cpp_headers/keyring_module.o 00:02:53.033 CXX test/cpp_headers/likely.o 00:02:53.033 CXX test/cpp_headers/log.o 00:02:53.033 CXX test/cpp_headers/lvol.o 00:02:53.033 CXX test/cpp_headers/md5.o 00:02:53.033 CXX test/cpp_headers/memory.o 00:02:53.033 CXX test/cpp_headers/mmio.o 00:02:53.033 CXX test/cpp_headers/nbd.o 00:02:53.033 CXX test/cpp_headers/notify.o 00:02:53.033 CXX test/cpp_headers/nvme.o 00:02:53.033 CXX test/cpp_headers/net.o 00:02:53.033 CXX test/cpp_headers/nvme_ocssd.o 00:02:53.033 CXX test/cpp_headers/nvme_intel.o 00:02:53.033 CXX test/cpp_headers/nvme_zns.o 00:02:53.033 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:53.033 CC test/app/jsoncat/jsoncat.o 00:02:53.033 CXX test/cpp_headers/nvme_spec.o 00:02:53.033 CXX test/cpp_headers/nvmf.o 00:02:53.033 CXX test/cpp_headers/nvmf_cmd.o 00:02:53.033 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:53.033 CXX test/cpp_headers/opal.o 00:02:53.033 CXX test/cpp_headers/nvmf_transport.o 00:02:53.033 CC test/thread/poller_perf/poller_perf.o 00:02:53.033 CXX test/cpp_headers/nvmf_spec.o 00:02:53.033 CXX test/cpp_headers/opal_spec.o 00:02:53.033 CXX test/cpp_headers/pci_ids.o 00:02:53.033 CXX test/cpp_headers/pipe.o 00:02:53.033 CXX test/cpp_headers/scheduler.o 00:02:53.033 CXX test/cpp_headers/queue.o 00:02:53.033 CXX test/cpp_headers/reduce.o 00:02:53.033 CXX test/cpp_headers/rpc.o 00:02:53.033 CC examples/util/zipf/zipf.o 00:02:53.033 CXX test/cpp_headers/scsi.o 00:02:53.033 CXX test/cpp_headers/scsi_spec.o 00:02:53.033 CXX test/cpp_headers/sock.o 00:02:53.033 LINK spdk_lspci 00:02:53.033 CXX test/cpp_headers/stdinc.o 00:02:53.033 CXX test/cpp_headers/string.o 00:02:53.033 CXX test/cpp_headers/thread.o 00:02:53.033 CXX test/cpp_headers/trace.o 00:02:53.033 CXX 
test/cpp_headers/tree.o 00:02:53.033 CC test/env/pci/pci_ut.o 00:02:53.033 CC test/app/histogram_perf/histogram_perf.o 00:02:53.033 CXX test/cpp_headers/trace_parser.o 00:02:53.033 CXX test/cpp_headers/util.o 00:02:53.033 CXX test/cpp_headers/ublk.o 00:02:53.033 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:53.033 CXX test/cpp_headers/uuid.o 00:02:53.033 CXX test/cpp_headers/version.o 00:02:53.033 CXX test/cpp_headers/vfio_user_pci.o 00:02:53.033 CC test/env/memory/memory_ut.o 00:02:53.033 CXX test/cpp_headers/vfio_user_spec.o 00:02:53.033 CXX test/cpp_headers/vmd.o 00:02:53.033 CXX test/cpp_headers/xor.o 00:02:53.033 CXX test/cpp_headers/vhost.o 00:02:53.033 CC test/app/stub/stub.o 00:02:53.033 CXX test/cpp_headers/zipf.o 00:02:53.033 CC test/env/vtophys/vtophys.o 00:02:53.033 CC examples/ioat/verify/verify.o 00:02:53.033 CC test/app/bdev_svc/bdev_svc.o 00:02:53.033 CC examples/ioat/perf/perf.o 00:02:53.033 CC test/dma/test_dma/test_dma.o 00:02:53.298 CC app/fio/nvme/fio_plugin.o 00:02:53.298 LINK rpc_client_test 00:02:53.298 CC app/fio/bdev/fio_plugin.o 00:02:53.298 LINK spdk_trace_record 00:02:53.298 LINK spdk_nvme_discover 00:02:53.298 LINK nvmf_tgt 00:02:53.298 LINK interrupt_tgt 00:02:53.557 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:53.557 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:53.557 CC test/env/mem_callbacks/mem_callbacks.o 00:02:53.557 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:53.557 LINK jsoncat 00:02:53.557 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:53.557 LINK iscsi_tgt 00:02:53.557 LINK stub 00:02:53.557 LINK zipf 00:02:53.557 LINK spdk_tgt 00:02:53.817 LINK poller_perf 00:02:53.817 LINK histogram_perf 00:02:53.817 LINK vtophys 00:02:53.817 LINK spdk_dd 00:02:53.817 LINK bdev_svc 00:02:53.817 LINK env_dpdk_post_init 00:02:53.817 LINK ioat_perf 00:02:53.817 LINK verify 00:02:53.817 LINK spdk_trace 00:02:54.078 LINK nvme_fuzz 00:02:54.078 LINK pci_ut 00:02:54.078 LINK vhost_fuzz 00:02:54.078 CC examples/vmd/led/led.o 00:02:54.078 CC examples/vmd/lsvmd/lsvmd.o 00:02:54.078 CC examples/idxd/perf/perf.o 00:02:54.078 CC examples/thread/thread/thread_ex.o 00:02:54.078 LINK spdk_bdev 00:02:54.078 LINK spdk_nvme 00:02:54.078 CC examples/sock/hello_world/hello_sock.o 00:02:54.078 LINK spdk_nvme_identify 00:02:54.078 LINK test_dma 00:02:54.078 CC test/event/reactor_perf/reactor_perf.o 00:02:54.340 CC test/event/event_perf/event_perf.o 00:02:54.340 CC test/event/reactor/reactor.o 00:02:54.340 CC test/event/app_repeat/app_repeat.o 00:02:54.340 CC test/event/scheduler/scheduler.o 00:02:54.340 LINK spdk_nvme_perf 00:02:54.340 CC app/vhost/vhost.o 00:02:54.340 LINK led 00:02:54.340 LINK lsvmd 00:02:54.340 LINK spdk_top 00:02:54.340 LINK mem_callbacks 00:02:54.340 LINK reactor_perf 00:02:54.340 LINK reactor 00:02:54.340 LINK event_perf 00:02:54.340 LINK app_repeat 00:02:54.340 LINK hello_sock 00:02:54.340 LINK thread 00:02:54.340 LINK idxd_perf 00:02:54.602 LINK vhost 00:02:54.602 LINK scheduler 00:02:54.863 LINK memory_ut 00:02:54.863 CC test/nvme/simple_copy/simple_copy.o 00:02:54.863 CC test/nvme/aer/aer.o 00:02:54.863 CC test/nvme/e2edp/nvme_dp.o 00:02:54.863 CC test/nvme/startup/startup.o 00:02:54.863 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:54.863 CC test/nvme/overhead/overhead.o 00:02:54.863 CC test/nvme/fdp/fdp.o 00:02:54.863 CC test/accel/dif/dif.o 00:02:54.863 CC test/nvme/connect_stress/connect_stress.o 00:02:54.863 CC test/nvme/reserve/reserve.o 00:02:54.863 CC test/nvme/reset/reset.o 00:02:54.863 CC test/nvme/cuse/cuse.o 00:02:54.863 CC 
test/nvme/fused_ordering/fused_ordering.o 00:02:54.863 CC test/nvme/sgl/sgl.o 00:02:54.863 CC test/nvme/boot_partition/boot_partition.o 00:02:54.863 CC test/nvme/err_injection/err_injection.o 00:02:54.863 CC test/nvme/compliance/nvme_compliance.o 00:02:54.863 CC test/blobfs/mkfs/mkfs.o 00:02:54.863 CC examples/nvme/hello_world/hello_world.o 00:02:54.863 CC examples/nvme/abort/abort.o 00:02:54.863 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:55.125 CC examples/nvme/reconnect/reconnect.o 00:02:55.125 CC examples/nvme/arbitration/arbitration.o 00:02:55.125 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:55.125 CC examples/nvme/hotplug/hotplug.o 00:02:55.125 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:55.125 CC test/lvol/esnap/esnap.o 00:02:55.125 CC examples/accel/perf/accel_perf.o 00:02:55.125 LINK boot_partition 00:02:55.125 LINK startup 00:02:55.125 LINK doorbell_aers 00:02:55.125 CC examples/blob/cli/blobcli.o 00:02:55.125 LINK err_injection 00:02:55.125 LINK fused_ordering 00:02:55.125 LINK connect_stress 00:02:55.125 LINK simple_copy 00:02:55.125 LINK reserve 00:02:55.125 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:55.125 LINK nvme_dp 00:02:55.125 CC examples/blob/hello_world/hello_blob.o 00:02:55.125 LINK reset 00:02:55.125 LINK aer 00:02:55.125 LINK mkfs 00:02:55.125 LINK overhead 00:02:55.125 LINK cmb_copy 00:02:55.125 LINK sgl 00:02:55.125 LINK fdp 00:02:55.125 LINK nvme_compliance 00:02:55.125 LINK pmr_persistence 00:02:55.125 LINK iscsi_fuzz 00:02:55.125 LINK hello_world 00:02:55.125 LINK hotplug 00:02:55.387 LINK arbitration 00:02:55.387 LINK reconnect 00:02:55.387 LINK abort 00:02:55.387 LINK hello_blob 00:02:55.387 LINK hello_fsdev 00:02:55.387 LINK dif 00:02:55.387 LINK nvme_manage 00:02:55.387 LINK accel_perf 00:02:55.649 LINK blobcli 00:02:55.911 LINK cuse 00:02:56.174 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.174 CC examples/bdev/hello_world/hello_bdev.o 00:02:56.174 CC test/bdev/bdevio/bdevio.o 00:02:56.436 LINK hello_bdev 00:02:56.436 LINK bdevio 00:02:56.699 LINK bdevperf 00:02:57.274 CC examples/nvmf/nvmf/nvmf.o 00:02:57.849 LINK nvmf 00:02:59.236 LINK esnap 00:02:59.498 00:02:59.498 real 0m54.573s 00:02:59.498 user 7m48.745s 00:02:59.498 sys 4m25.928s 00:02:59.498 11:17:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:59.498 11:17:51 make -- common/autotest_common.sh@10 -- $ set +x 00:02:59.498 ************************************ 00:02:59.498 END TEST make 00:02:59.498 ************************************ 00:02:59.498 11:17:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:59.498 11:17:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:59.498 11:17:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:59.498 11:17:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.498 11:17:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:59.498 11:17:51 -- pm/common@44 -- $ pid=3194765 00:02:59.498 11:17:51 -- pm/common@50 -- $ kill -TERM 3194765 00:02:59.498 11:17:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.498 11:17:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:59.498 11:17:51 -- pm/common@44 -- $ pid=3194766 00:02:59.498 11:17:51 -- pm/common@50 -- $ kill -TERM 3194766 00:02:59.498 11:17:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.498 11:17:51 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:59.498 11:17:51 -- pm/common@44 -- $ pid=3194768 00:02:59.498 11:17:51 -- pm/common@50 -- $ kill -TERM 3194768 00:02:59.498 11:17:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.498 11:17:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:59.498 11:17:51 -- pm/common@44 -- $ pid=3194792 00:02:59.498 11:17:51 -- pm/common@50 -- $ sudo -E kill -TERM 3194792 00:02:59.498 11:17:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:59.498 11:17:51 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:59.762 11:17:51 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:59.762 11:17:51 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:59.762 11:17:51 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:59.762 11:17:51 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:59.762 11:17:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:59.762 11:17:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:59.762 11:17:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:59.762 11:17:51 -- scripts/common.sh@336 -- # IFS=.-: 00:02:59.762 11:17:51 -- scripts/common.sh@336 -- # read -ra ver1 00:02:59.762 11:17:51 -- scripts/common.sh@337 -- # IFS=.-: 00:02:59.762 11:17:51 -- scripts/common.sh@337 -- # read -ra ver2 00:02:59.762 11:17:51 -- scripts/common.sh@338 -- # local 'op=<' 00:02:59.762 11:17:51 -- scripts/common.sh@340 -- # ver1_l=2 00:02:59.762 11:17:51 -- scripts/common.sh@341 -- # ver2_l=1 00:02:59.762 11:17:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:59.762 11:17:51 -- scripts/common.sh@344 -- # case "$op" in 00:02:59.762 11:17:51 -- scripts/common.sh@345 -- # : 1 00:02:59.762 11:17:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:59.762 11:17:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.762 11:17:51 -- scripts/common.sh@365 -- # decimal 1 00:02:59.762 11:17:51 -- scripts/common.sh@353 -- # local d=1 00:02:59.762 11:17:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:59.762 11:17:51 -- scripts/common.sh@355 -- # echo 1 00:02:59.762 11:17:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:59.762 11:17:51 -- scripts/common.sh@366 -- # decimal 2 00:02:59.762 11:17:51 -- scripts/common.sh@353 -- # local d=2 00:02:59.762 11:17:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:59.762 11:17:51 -- scripts/common.sh@355 -- # echo 2 00:02:59.762 11:17:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:59.762 11:17:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:59.762 11:17:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:59.762 11:17:51 -- scripts/common.sh@368 -- # return 0 00:02:59.762 11:17:51 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:59.762 11:17:51 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:59.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.762 --rc genhtml_branch_coverage=1 00:02:59.762 --rc genhtml_function_coverage=1 00:02:59.762 --rc genhtml_legend=1 00:02:59.762 --rc geninfo_all_blocks=1 00:02:59.762 --rc geninfo_unexecuted_blocks=1 00:02:59.762 00:02:59.762 ' 00:02:59.762 11:17:51 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:59.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.762 --rc genhtml_branch_coverage=1 00:02:59.762 --rc genhtml_function_coverage=1 00:02:59.762 --rc genhtml_legend=1 00:02:59.762 --rc geninfo_all_blocks=1 00:02:59.762 --rc geninfo_unexecuted_blocks=1 00:02:59.762 00:02:59.762 ' 00:02:59.762 11:17:51 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:59.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.762 --rc genhtml_branch_coverage=1 00:02:59.762 --rc genhtml_function_coverage=1 00:02:59.762 --rc genhtml_legend=1 00:02:59.762 --rc geninfo_all_blocks=1 00:02:59.762 --rc geninfo_unexecuted_blocks=1 00:02:59.762 00:02:59.762 ' 00:02:59.762 11:17:51 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:59.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.762 --rc genhtml_branch_coverage=1 00:02:59.762 --rc genhtml_function_coverage=1 00:02:59.762 --rc genhtml_legend=1 00:02:59.762 --rc geninfo_all_blocks=1 00:02:59.762 --rc geninfo_unexecuted_blocks=1 00:02:59.762 00:02:59.762 ' 00:02:59.762 11:17:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:59.762 11:17:51 -- nvmf/common.sh@7 -- # uname -s 00:02:59.762 11:17:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:59.762 11:17:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:59.762 11:17:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:59.762 11:17:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:59.762 11:17:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:59.762 11:17:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:59.762 11:17:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:59.762 11:17:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:59.762 11:17:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:59.762 11:17:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:59.762 11:17:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:59.762 11:17:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:59.762 11:17:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:59.762 11:17:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:59.762 11:17:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:59.762 11:17:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:59.762 11:17:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:59.762 11:17:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:59.763 11:17:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:59.763 11:17:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.763 11:17:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.763 11:17:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.763 11:17:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.763 11:17:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.763 11:17:51 -- paths/export.sh@5 -- # export PATH 00:02:59.763 11:17:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.763 11:17:51 -- nvmf/common.sh@51 -- # : 0 00:02:59.763 11:17:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:59.763 11:17:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:59.763 11:17:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:59.763 11:17:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:59.763 11:17:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:59.763 11:17:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:59.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:59.763 11:17:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:59.763 11:17:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:59.763 11:17:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:59.763 11:17:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:59.763 11:17:51 -- spdk/autotest.sh@32 -- # uname -s 00:02:59.763 11:17:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:59.763 11:17:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:59.763 11:17:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
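The trace just above saves the machine's systemd-coredump core_pattern and creates a coredumps output directory; the echo that installs SPDK's core-collector as the replacement handler lands just below. A minimal save-and-restore sketch of the same mechanism, assuming /proc/sys/kernel/core_pattern as the target file (implied by the old/new pattern values, not shown in the trace) and treating $rootdir and $output_dir as placeholders:

  # Hypothetical reproduction of autotest.sh's core-dump redirection (needs root).
  # A core_pattern starting with '|' pipes every crash to the named handler.
  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
  mkdir -p "$output_dir/coredumps"
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
  # ... run the tests, collecting any cores into $output_dir/coredumps ...
  echo "$old_core_pattern" > /proc/sys/kernel/core_pattern   # restore on cleanup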
00:02:59.763 11:17:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:59.763 11:17:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:59.763 11:17:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:59.763 11:17:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:59.763 11:17:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:59.763 11:17:51 -- spdk/autotest.sh@48 -- # udevadm_pid=3260131 00:02:59.763 11:17:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:59.763 11:17:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:59.763 11:17:51 -- pm/common@17 -- # local monitor 00:02:59.763 11:17:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.763 11:17:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.763 11:17:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.763 11:17:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.763 11:17:51 -- pm/common@21 -- # date +%s 00:02:59.763 11:17:51 -- pm/common@25 -- # sleep 1 00:02:59.763 11:17:51 -- pm/common@21 -- # date +%s 00:02:59.763 11:17:51 -- pm/common@21 -- # date +%s 00:02:59.763 11:17:51 -- pm/common@21 -- # date +%s 00:02:59.763 11:17:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733739471 00:02:59.763 11:17:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733739471 00:02:59.763 11:17:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733739471 00:02:59.763 11:17:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733739471 00:03:00.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733739471_collect-vmstat.pm.log 00:03:00.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733739471_collect-cpu-load.pm.log 00:03:00.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733739471_collect-cpu-temp.pm.log 00:03:00.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733739471_collect-bmc-pm.bmc.pm.log 00:03:00.971 11:17:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:00.971 11:17:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:00.971 11:17:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:00.971 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:03:00.971 11:17:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:00.971 11:17:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:00.971 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:03:00.971 11:17:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:00.971 11:17:52 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.971 11:17:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.971 11:17:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:00.971 11:17:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.971 11:17:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:00.971 11:17:52 -- common/autotest_common.sh@1457 -- # uname 00:03:00.971 11:17:52 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:00.971 11:17:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:00.971 11:17:52 -- common/autotest_common.sh@1477 -- # uname 00:03:00.971 11:17:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:00.971 11:17:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:00.971 11:17:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:00.971 lcov: LCOV version 1.15 00:03:00.971 11:17:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:15.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:15.895 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:34.033 11:18:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:34.033 11:18:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.033 11:18:23 -- common/autotest_common.sh@10 -- # set +x 00:03:34.033 11:18:23 -- spdk/autotest.sh@78 -- # rm -f 00:03:34.033 11:18:23 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.615 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:34.615 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:34.615 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:34.876 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:34.876 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:34.876 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:34.876 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:34.876 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:34.876 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:34.876 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:35.138 11:18:27 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:35.138 11:18:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:35.138 11:18:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:35.138 11:18:27 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:35.138 11:18:27 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:35.138 11:18:27 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:35.138 11:18:27 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:35.138 11:18:27 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:35.138 11:18:27 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:35.138 11:18:27 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:35.138 11:18:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:35.138 11:18:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.138 11:18:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:35.138 11:18:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:35.138 11:18:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.138 11:18:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:35.138 11:18:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:35.138 11:18:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:35.138 11:18:27 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.138 No valid GPT data, bailing 00:03:35.138 11:18:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.138 11:18:27 -- scripts/common.sh@394 -- # pt= 00:03:35.138 11:18:27 -- scripts/common.sh@395 -- # return 1 00:03:35.138 11:18:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.399 1+0 records in 00:03:35.400 1+0 records out 00:03:35.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618605 s, 170 MB/s 00:03:35.400 11:18:27 -- spdk/autotest.sh@105 -- # sync 00:03:35.400 11:18:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.400 11:18:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.400 11:18:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.545 11:18:34 -- spdk/autotest.sh@111 -- # uname -s 00:03:43.545 11:18:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:43.545 11:18:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:43.545 11:18:34 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:46.095 Hugepages 00:03:46.095 node hugesize free / total 00:03:46.095 node0 1048576kB 0 / 0 00:03:46.095 node0 2048kB 0 / 0 00:03:46.095 node1 1048576kB 0 / 0 00:03:46.095 node1 2048kB 0 / 0 00:03:46.095 00:03:46.095 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.356 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:46.356 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:46.356 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:46.356 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:46.356 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:46.356 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:46.356 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:46.356 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:46.356 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:46.356 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:46.356 I/OAT 0000:80:01.1 8086 0b00 1 
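The get_zoned_devs/block_in_use trace above is the safety gate before scrubbing /dev/nvme0n1: spdk-gpt.py finds no valid GPT data, blkid reports an empty PTTYPE, block_in_use returns 1 (device free), and only then is the first MiB zeroed. A hedged sketch of the same gate, using stock blkid in place of the SPDK helper; dev is an assumption:

  # Wipe a namespace's leading metadata only when no partition table is present.
  dev=/dev/nvme0n1
  pt=$(blkid -s PTTYPE -o value "$dev")       # empty when no GPT/MBR signature exists
  if [[ -z "$pt" ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1 # clear stale fs/partition metadata
      sync
  else
      echo "$dev carries a $pt partition table; not touching it" >&2
  fi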
ioatdma - - 00:03:46.356 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:46.356 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:46.356 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:46.356 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:46.356 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:46.356 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:46.356 11:18:38 -- spdk/autotest.sh@117 -- # uname -s 00:03:46.356 11:18:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:46.356 11:18:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:46.356 11:18:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.567 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:50.567 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:51.953 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:51.953 11:18:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:53.339 11:18:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:53.339 11:18:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:53.339 11:18:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.339 11:18:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:53.339 11:18:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.339 11:18:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.339 11:18:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.339 11:18:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.339 11:18:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.339 11:18:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:53.339 11:18:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:53.339 11:18:45 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.645 Waiting for block devices as requested 00:03:56.645 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:56.645 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:56.645 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:56.907 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:56.907 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:56.907 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:56.907 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:57.168 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:57.168 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:57.430 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:57.430 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
00:03:57.430 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:57.691 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:57.691 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:57.691 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:57.691 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:57.953 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:58.215 11:18:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:58.215 11:18:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:58.215 11:18:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:58.215 11:18:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:58.215 11:18:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:58.215 11:18:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:58.215 11:18:50 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:58.215 11:18:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:58.215 11:18:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:58.215 11:18:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:58.215 11:18:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:58.215 11:18:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:58.215 11:18:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:58.215 11:18:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:58.215 11:18:50 -- common/autotest_common.sh@1543 -- # continue 00:03:58.215 11:18:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:58.215 11:18:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.215 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:03:58.215 11:18:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:58.215 11:18:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.215 11:18:50 -- common/autotest_common.sh@10 -- # set +x 00:03:58.215 11:18:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.432 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:02.432 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:02.432 11:18:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:02.432 11:18:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.432 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:02.432 11:18:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:02.432 11:18:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:02.432 11:18:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:02.432 11:18:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:02.432 11:18:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:02.432 11:18:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:02.432 11:18:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:02.432 11:18:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:02.432 11:18:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:02.432 11:18:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:02.432 11:18:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.432 11:18:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.432 11:18:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.432 11:18:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:02.432 11:18:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:02.432 11:18:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:02.432 11:18:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:02.432 11:18:54 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:02.432 11:18:54 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:02.432 11:18:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:02.432 11:18:54 -- common/autotest_common.sh@1572 -- # return 0 00:04:02.432 11:18:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:02.432 11:18:54 -- common/autotest_common.sh@1580 -- # return 0 00:04:02.432 11:18:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.432 11:18:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.432 11:18:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.432 11:18:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.432 11:18:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.432 11:18:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.432 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:02.432 11:18:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:02.432 11:18:54 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:02.432 11:18:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.432 11:18:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.432 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:02.432 ************************************ 00:04:02.432 START TEST env 00:04:02.432 ************************************ 00:04:02.432 11:18:54 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
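get_nvme_bdfs, used twice in the pre-test cleanup above, derives the NVMe PCI addresses from gen_nvme.sh's generated JSON rather than parsing lspci: each controller entry carries its BDF in params.traddr. A minimal sketch of that extraction ($rootdir stands in for the SPDK checkout, as in the trace):

  # Collect NVMe BDFs from the generated SPDK bdev config.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe devices found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"                  # on this box: 0000:65:00.0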
00:04:02.695 * Looking for test storage... 00:04:02.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:02.695 11:18:54 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.695 11:18:54 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.695 11:18:54 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.696 11:18:54 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.696 11:18:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.696 11:18:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.696 11:18:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.696 11:18:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.696 11:18:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.696 11:18:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.696 11:18:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.696 11:18:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.696 11:18:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.696 11:18:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.696 11:18:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.696 11:18:54 env -- scripts/common.sh@344 -- # case "$op" in 00:04:02.696 11:18:54 env -- scripts/common.sh@345 -- # : 1 00:04:02.696 11:18:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.696 11:18:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.696 11:18:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.696 11:18:54 env -- scripts/common.sh@353 -- # local d=1 00:04:02.696 11:18:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.696 11:18:54 env -- scripts/common.sh@355 -- # echo 1 00:04:02.696 11:18:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.696 11:18:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.696 11:18:54 env -- scripts/common.sh@353 -- # local d=2 00:04:02.696 11:18:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.696 11:18:54 env -- scripts/common.sh@355 -- # echo 2 00:04:02.696 11:18:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.696 11:18:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.696 11:18:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.696 11:18:54 env -- scripts/common.sh@368 -- # return 0 00:04:02.696 11:18:54 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.696 11:18:54 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.696 --rc genhtml_branch_coverage=1 00:04:02.696 --rc genhtml_function_coverage=1 00:04:02.696 --rc genhtml_legend=1 00:04:02.696 --rc geninfo_all_blocks=1 00:04:02.696 --rc geninfo_unexecuted_blocks=1 00:04:02.696 00:04:02.696 ' 00:04:02.696 11:18:54 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.696 --rc genhtml_branch_coverage=1 00:04:02.696 --rc genhtml_function_coverage=1 00:04:02.696 --rc genhtml_legend=1 00:04:02.696 --rc geninfo_all_blocks=1 00:04:02.696 --rc geninfo_unexecuted_blocks=1 00:04:02.696 00:04:02.696 ' 00:04:02.696 11:18:54 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.696 --rc genhtml_branch_coverage=1 00:04:02.696 
--rc genhtml_function_coverage=1 00:04:02.696 --rc genhtml_legend=1 00:04:02.696 --rc geninfo_all_blocks=1 00:04:02.696 --rc geninfo_unexecuted_blocks=1 00:04:02.696 00:04:02.696 ' 00:04:02.696 11:18:54 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.696 --rc genhtml_branch_coverage=1 00:04:02.696 --rc genhtml_function_coverage=1 00:04:02.696 --rc genhtml_legend=1 00:04:02.696 --rc geninfo_all_blocks=1 00:04:02.696 --rc geninfo_unexecuted_blocks=1 00:04:02.696 00:04:02.696 ' 00:04:02.696 11:18:54 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.697 11:18:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.697 11:18:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.697 11:18:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.697 ************************************ 00:04:02.697 START TEST env_memory 00:04:02.697 ************************************ 00:04:02.697 11:18:54 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.697 00:04:02.697 00:04:02.697 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.697 http://cunit.sourceforge.net/ 00:04:02.697 00:04:02.697 00:04:02.697 Suite: memory 00:04:02.697 Test: alloc and free memory map ...[2024-12-09 11:18:54.834800] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.697 passed 00:04:02.960 Test: mem map translation ...[2024-12-09 11:18:54.861655] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.960 [2024-12-09 11:18:54.861674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.960 [2024-12-09 11:18:54.861720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.960 [2024-12-09 11:18:54.861731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.960 passed 00:04:02.960 Test: mem map registration ...[2024-12-09 11:18:54.923728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.960 [2024-12-09 11:18:54.923744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.960 passed 00:04:02.960 Test: mem map adjacent registrations ...passed 00:04:02.960 00:04:02.960 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.960 suites 1 1 n/a 0 0 00:04:02.960 tests 4 4 4 0 0 00:04:02.960 asserts 152 152 152 0 n/a 00:04:02.960 00:04:02.960 Elapsed time = 0.203 seconds 00:04:02.960 00:04:02.960 real 0m0.218s 00:04:02.960 user 0m0.209s 00:04:02.960 sys 0m0.009s 00:04:02.960 11:18:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.960 11:18:54 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.960 ************************************ 00:04:02.960 END TEST env_memory 00:04:02.960 ************************************ 00:04:02.960 11:18:55 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.960 11:18:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.960 11:18:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.960 11:18:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.960 ************************************ 00:04:02.960 START TEST env_vtophys 00:04:02.960 ************************************ 00:04:02.960 11:18:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.960 EAL: lib.eal log level changed from notice to debug 00:04:02.960 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.960 EAL: Detected lcore 1 as core 1 on socket 0 00:04:02.960 EAL: Detected lcore 2 as core 2 on socket 0 00:04:02.960 EAL: Detected lcore 3 as core 3 on socket 0 00:04:02.960 EAL: Detected lcore 4 as core 4 on socket 0 00:04:02.960 EAL: Detected lcore 5 as core 5 on socket 0 00:04:02.960 EAL: Detected lcore 6 as core 6 on socket 0 00:04:02.960 EAL: Detected lcore 7 as core 7 on socket 0 00:04:02.960 EAL: Detected lcore 8 as core 8 on socket 0 00:04:02.960 EAL: Detected lcore 9 as core 9 on socket 0 00:04:02.960 EAL: Detected lcore 10 as core 10 on socket 0 00:04:02.960 EAL: Detected lcore 11 as core 11 on socket 0 00:04:02.960 EAL: Detected lcore 12 as core 12 on socket 0 00:04:02.960 EAL: Detected lcore 13 as core 13 on socket 0 00:04:02.960 EAL: Detected lcore 14 as core 14 on socket 0 00:04:02.960 EAL: Detected lcore 15 as core 15 on socket 0 00:04:02.960 EAL: Detected lcore 16 as core 16 on socket 0 00:04:02.960 EAL: Detected lcore 17 as core 17 on socket 0 00:04:02.960 EAL: Detected lcore 18 as core 18 on socket 0 00:04:02.960 EAL: Detected lcore 19 as core 19 on socket 0 00:04:02.960 EAL: Detected lcore 20 as core 20 on socket 0 00:04:02.960 EAL: Detected lcore 21 as core 21 on socket 0 00:04:02.960 EAL: Detected lcore 22 as core 22 on socket 0 00:04:02.960 EAL: Detected lcore 23 as core 23 on socket 0 00:04:02.960 EAL: Detected lcore 24 as core 24 on socket 0 00:04:02.960 EAL: Detected lcore 25 as core 25 on socket 0 00:04:02.960 EAL: Detected lcore 26 as core 26 on socket 0 00:04:02.960 EAL: Detected lcore 27 as core 27 on socket 0 00:04:02.960 EAL: Detected lcore 28 as core 28 on socket 0 00:04:02.960 EAL: Detected lcore 29 as core 29 on socket 0 00:04:02.960 EAL: Detected lcore 30 as core 30 on socket 0 00:04:02.960 EAL: Detected lcore 31 as core 31 on socket 0 00:04:02.960 EAL: Detected lcore 32 as core 32 on socket 0 00:04:02.960 EAL: Detected lcore 33 as core 33 on socket 0 00:04:02.960 EAL: Detected lcore 34 as core 34 on socket 0 00:04:02.960 EAL: Detected lcore 35 as core 35 on socket 0 00:04:02.960 EAL: Detected lcore 36 as core 0 on socket 1 00:04:02.960 EAL: Detected lcore 37 as core 1 on socket 1 00:04:02.960 EAL: Detected lcore 38 as core 2 on socket 1 00:04:02.960 EAL: Detected lcore 39 as core 3 on socket 1 00:04:02.960 EAL: Detected lcore 40 as core 4 on socket 1 00:04:02.960 EAL: Detected lcore 41 as core 5 on socket 1 00:04:02.960 EAL: Detected lcore 42 as core 6 on socket 1 00:04:02.960 EAL: Detected lcore 43 as core 7 on socket 1 00:04:02.960 EAL: Detected lcore 44 as core 8 on socket 1 00:04:02.960 EAL: Detected 
lcore 45 as core 9 on socket 1 00:04:02.960 EAL: Detected lcore 46 as core 10 on socket 1 00:04:02.960 EAL: Detected lcore 47 as core 11 on socket 1 00:04:02.960 EAL: Detected lcore 48 as core 12 on socket 1 00:04:02.960 EAL: Detected lcore 49 as core 13 on socket 1 00:04:02.960 EAL: Detected lcore 50 as core 14 on socket 1 00:04:02.960 EAL: Detected lcore 51 as core 15 on socket 1 00:04:02.960 EAL: Detected lcore 52 as core 16 on socket 1 00:04:02.960 EAL: Detected lcore 53 as core 17 on socket 1 00:04:02.960 EAL: Detected lcore 54 as core 18 on socket 1 00:04:02.960 EAL: Detected lcore 55 as core 19 on socket 1 00:04:02.960 EAL: Detected lcore 56 as core 20 on socket 1 00:04:02.960 EAL: Detected lcore 57 as core 21 on socket 1 00:04:02.960 EAL: Detected lcore 58 as core 22 on socket 1 00:04:02.960 EAL: Detected lcore 59 as core 23 on socket 1 00:04:02.960 EAL: Detected lcore 60 as core 24 on socket 1 00:04:02.960 EAL: Detected lcore 61 as core 25 on socket 1 00:04:02.960 EAL: Detected lcore 62 as core 26 on socket 1 00:04:02.960 EAL: Detected lcore 63 as core 27 on socket 1 00:04:02.960 EAL: Detected lcore 64 as core 28 on socket 1 00:04:02.960 EAL: Detected lcore 65 as core 29 on socket 1 00:04:02.960 EAL: Detected lcore 66 as core 30 on socket 1 00:04:02.960 EAL: Detected lcore 67 as core 31 on socket 1 00:04:02.960 EAL: Detected lcore 68 as core 32 on socket 1 00:04:02.960 EAL: Detected lcore 69 as core 33 on socket 1 00:04:02.960 EAL: Detected lcore 70 as core 34 on socket 1 00:04:02.960 EAL: Detected lcore 71 as core 35 on socket 1 00:04:02.960 EAL: Detected lcore 72 as core 0 on socket 0 00:04:02.960 EAL: Detected lcore 73 as core 1 on socket 0 00:04:02.960 EAL: Detected lcore 74 as core 2 on socket 0 00:04:02.960 EAL: Detected lcore 75 as core 3 on socket 0 00:04:02.960 EAL: Detected lcore 76 as core 4 on socket 0 00:04:02.960 EAL: Detected lcore 77 as core 5 on socket 0 00:04:02.960 EAL: Detected lcore 78 as core 6 on socket 0 00:04:02.960 EAL: Detected lcore 79 as core 7 on socket 0 00:04:02.960 EAL: Detected lcore 80 as core 8 on socket 0 00:04:02.960 EAL: Detected lcore 81 as core 9 on socket 0 00:04:02.960 EAL: Detected lcore 82 as core 10 on socket 0 00:04:02.960 EAL: Detected lcore 83 as core 11 on socket 0 00:04:02.961 EAL: Detected lcore 84 as core 12 on socket 0 00:04:02.961 EAL: Detected lcore 85 as core 13 on socket 0 00:04:02.961 EAL: Detected lcore 86 as core 14 on socket 0 00:04:02.961 EAL: Detected lcore 87 as core 15 on socket 0 00:04:02.961 EAL: Detected lcore 88 as core 16 on socket 0 00:04:02.961 EAL: Detected lcore 89 as core 17 on socket 0 00:04:02.961 EAL: Detected lcore 90 as core 18 on socket 0 00:04:02.961 EAL: Detected lcore 91 as core 19 on socket 0 00:04:02.961 EAL: Detected lcore 92 as core 20 on socket 0 00:04:02.961 EAL: Detected lcore 93 as core 21 on socket 0 00:04:02.961 EAL: Detected lcore 94 as core 22 on socket 0 00:04:02.961 EAL: Detected lcore 95 as core 23 on socket 0 00:04:02.961 EAL: Detected lcore 96 as core 24 on socket 0 00:04:02.961 EAL: Detected lcore 97 as core 25 on socket 0 00:04:02.961 EAL: Detected lcore 98 as core 26 on socket 0 00:04:02.961 EAL: Detected lcore 99 as core 27 on socket 0 00:04:02.961 EAL: Detected lcore 100 as core 28 on socket 0 00:04:02.961 EAL: Detected lcore 101 as core 29 on socket 0 00:04:02.961 EAL: Detected lcore 102 as core 30 on socket 0 00:04:02.961 EAL: Detected lcore 103 as core 31 on socket 0 00:04:02.961 EAL: Detected lcore 104 as core 32 on socket 0 00:04:02.961 EAL: Detected lcore 105 as core 33 
on socket 0 00:04:02.961 EAL: Detected lcore 106 as core 34 on socket 0 00:04:02.961 EAL: Detected lcore 107 as core 35 on socket 0 00:04:02.961 EAL: Detected lcore 108 as core 0 on socket 1 00:04:02.961 EAL: Detected lcore 109 as core 1 on socket 1 00:04:02.961 EAL: Detected lcore 110 as core 2 on socket 1 00:04:02.961 EAL: Detected lcore 111 as core 3 on socket 1 00:04:02.961 EAL: Detected lcore 112 as core 4 on socket 1 00:04:02.961 EAL: Detected lcore 113 as core 5 on socket 1 00:04:02.961 EAL: Detected lcore 114 as core 6 on socket 1 00:04:02.961 EAL: Detected lcore 115 as core 7 on socket 1 00:04:02.961 EAL: Detected lcore 116 as core 8 on socket 1 00:04:02.961 EAL: Detected lcore 117 as core 9 on socket 1 00:04:02.961 EAL: Detected lcore 118 as core 10 on socket 1 00:04:02.961 EAL: Detected lcore 119 as core 11 on socket 1 00:04:02.961 EAL: Detected lcore 120 as core 12 on socket 1 00:04:02.961 EAL: Detected lcore 121 as core 13 on socket 1 00:04:02.961 EAL: Detected lcore 122 as core 14 on socket 1 00:04:02.961 EAL: Detected lcore 123 as core 15 on socket 1 00:04:02.961 EAL: Detected lcore 124 as core 16 on socket 1 00:04:02.961 EAL: Detected lcore 125 as core 17 on socket 1 00:04:02.961 EAL: Detected lcore 126 as core 18 on socket 1 00:04:02.961 EAL: Detected lcore 127 as core 19 on socket 1 00:04:02.961 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:02.961 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:02.961 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:02.961 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:02.961 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:02.961 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:02.961 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:02.961 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:02.961 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:02.961 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:02.961 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:02.961 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:02.961 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:02.961 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:02.961 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:02.961 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:02.961 EAL: Maximum logical cores by configuration: 128 00:04:02.961 EAL: Detected CPU lcores: 128 00:04:02.961 EAL: Detected NUMA nodes: 2 00:04:02.961 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.961 EAL: Detected shared linkage of DPDK 00:04:02.961 EAL: No shared files mode enabled, IPC will be disabled 00:04:02.961 EAL: Bus pci wants IOVA as 'DC' 00:04:02.961 EAL: Buses did not request a specific IOVA mode. 00:04:02.961 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:02.961 EAL: Selected IOVA mode 'VA' 00:04:02.961 EAL: Probing VFIO support... 00:04:02.961 EAL: IOMMU type 1 (Type 1) is supported 00:04:02.961 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:02.961 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:02.961 EAL: VFIO support initialized 00:04:02.961 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.961 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.961 EAL: Setting up physically contiguous memory... 
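EAL's lcore walk above binds 128 usable logical cores to 2 NUMA sockets and marks lcores 128-143 as skipped, since this DPDK build caps RTE_MAX_LCORE at 128 ("Maximum logical cores by configuration: 128"). The same topology can be read back from sysfs; a small sketch (the node paths are standard Linux, and the 128 cap is DPDK's, not the kernel's):

  # Enumerate logical CPUs per NUMA node, mirroring EAL's detection output.
  for node in /sys/devices/system/node/node[0-9]*; do
      echo "$(basename "$node"): cpus $(cat "$node/cpulist")"
  done
  # EAL additionally ignores CPU indexes >= RTE_MAX_LCORE (128 in this build),
  # which is why lcores 128-143 appear as 'Skipped' rather than 'Detected'.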
00:04:02.961 EAL: Setting maximum number of open files to 524288 00:04:02.961 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.961 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:02.961 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.961 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:02.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.961 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:02.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.961 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:02.961 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:02.961 EAL: Hugepages will be freed exactly as allocated. 00:04:02.961 EAL: No shared files mode enabled, IPC is disabled 00:04:02.961 EAL: No shared files mode enabled, IPC is disabled 00:04:02.961 EAL: TSC frequency is ~2400000 KHz 00:04:02.961 EAL: Main lcore 0 is ready (tid=7fc59015da00;cpuset=[0]) 00:04:02.961 EAL: Trying to obtain current memory policy. 00:04:02.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.961 EAL: Restoring previous memory policy: 0 00:04:02.961 EAL: request: mp_malloc_sync 00:04:02.961 EAL: No shared files mode enabled, IPC is disabled 00:04:02.961 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.961 EAL: No shared files mode enabled, IPC is disabled 00:04:03.224 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:03.224 EAL: Mem event callback 'spdk:(nil)' registered 00:04:03.224 00:04:03.224 00:04:03.224 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.224 http://cunit.sourceforge.net/ 00:04:03.224 00:04:03.224 00:04:03.224 Suite: components_suite 00:04:03.224 Test: vtophys_malloc_test ...passed 00:04:03.224 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.224 EAL: Restoring previous memory policy: 4 00:04:03.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.224 EAL: request: mp_malloc_sync 00:04:03.224 EAL: No shared files mode enabled, IPC is disabled 00:04:03.224 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.224 EAL: request: mp_malloc_sync 00:04:03.224 EAL: No shared files mode enabled, IPC is disabled 00:04:03.224 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.224 EAL: Trying to obtain current memory policy. 00:04:03.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.224 EAL: Restoring previous memory policy: 4 00:04:03.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.225 EAL: Trying to obtain current memory policy. 00:04:03.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.225 EAL: Restoring previous memory policy: 4 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.225 EAL: Trying to obtain current memory policy. 
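Each memseg list above reserves a 0x400000000-byte VA window, which is exactly the announced n_segs:8192 times the 2 MiB hugepage size; with 4 lists per socket and 2 sockets, EAL pre-reserves 8 x 16 GiB = 128 GiB of virtual address space before a single hugepage is touched. A one-line check of that arithmetic:

  # Both expressions print 17179869184 (16 GiB): one memseg list's VA window.
  echo $(( 8192 * 2 * 1024 * 1024 )) $(( 0x400000000 ))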
00:04:03.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.225 EAL: Restoring previous memory policy: 4 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.225 EAL: Trying to obtain current memory policy. 00:04:03.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.225 EAL: Restoring previous memory policy: 4 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was expanded by 34MB 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was shrunk by 34MB 00:04:03.225 EAL: Trying to obtain current memory policy. 00:04:03.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.225 EAL: Restoring previous memory policy: 4 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.225 EAL: Trying to obtain current memory policy. 00:04:03.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.225 EAL: Restoring previous memory policy: 4 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was shrunk by 130MB 00:04:03.225 EAL: Trying to obtain current memory policy. 00:04:03.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.225 EAL: Restoring previous memory policy: 4 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was expanded by 258MB 00:04:03.225 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.225 EAL: request: mp_malloc_sync 00:04:03.225 EAL: No shared files mode enabled, IPC is disabled 00:04:03.225 EAL: Heap on socket 0 was shrunk by 258MB 00:04:03.225 EAL: Trying to obtain current memory policy. 
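The expand sizes in vtophys_spdk_malloc_test above step through 4, 6, 10, 18, 34, 66, 130 and 258 MB, i.e. each is a power-of-two allocation plus 2 MB, plausibly the 2 MB the heap had already grown by at startup (514 MB and 1026 MB follow below). A quick sketch that regenerates the observed sequence:

  # The logged heap expansions follow 2^k MB + 2 MB for k = 1..10.
  for k in 1 2 3 4 5 6 7 8 9 10; do
      printf '%dMB ' $(( (1 << k) + 2 ))  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
  done; echo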
00:04:03.225 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:03.485 EAL: Restoring previous memory policy: 4
00:04:03.485 EAL: Calling mem event callback 'spdk:(nil)'
00:04:03.485 EAL: request: mp_malloc_sync
00:04:03.485 EAL: No shared files mode enabled, IPC is disabled
00:04:03.485 EAL: Heap on socket 0 was expanded by 514MB
00:04:03.485 EAL: Calling mem event callback 'spdk:(nil)'
00:04:03.485 EAL: request: mp_malloc_sync
00:04:03.485 EAL: No shared files mode enabled, IPC is disabled
00:04:03.485 EAL: Heap on socket 0 was shrunk by 514MB
00:04:03.485 EAL: Trying to obtain current memory policy.
00:04:03.485 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:03.485 EAL: Restoring previous memory policy: 4
00:04:03.485 EAL: Calling mem event callback 'spdk:(nil)'
00:04:03.485 EAL: request: mp_malloc_sync
00:04:03.485 EAL: No shared files mode enabled, IPC is disabled
00:04:03.485 EAL: Heap on socket 0 was expanded by 1026MB
00:04:03.745 EAL: Calling mem event callback 'spdk:(nil)'
00:04:03.745 EAL: request: mp_malloc_sync
00:04:03.745 EAL: No shared files mode enabled, IPC is disabled
00:04:03.745 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:03.745 passed
00:04:03.745
00:04:03.745 Run Summary: Type Total Ran Passed Failed Inactive
00:04:03.745 suites 1 1 n/a 0 0
00:04:03.745 tests 2 2 2 0 0
00:04:03.745 asserts 497 497 497 0 n/a
00:04:03.745
00:04:03.745 Elapsed time = 0.646 seconds
00:04:03.745 EAL: Calling mem event callback 'spdk:(nil)'
00:04:03.745 EAL: request: mp_malloc_sync
00:04:03.745 EAL: No shared files mode enabled, IPC is disabled
00:04:03.745 EAL: Heap on socket 0 was shrunk by 2MB
00:04:03.745 EAL: No shared files mode enabled, IPC is disabled
00:04:03.745 EAL: No shared files mode enabled, IPC is disabled
00:04:03.745 EAL: No shared files mode enabled, IPC is disabled
00:04:03.745
00:04:03.745 real 0m0.780s
00:04:03.745 user 0m0.414s
00:04:03.745 sys 0m0.333s
00:04:03.745 11:18:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.745 11:18:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:03.745 ************************************
00:04:03.745 END TEST env_vtophys
00:04:03.745 ************************************
00:04:03.745 11:18:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:03.745 11:18:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:03.745 11:18:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:03.745 11:18:55 env -- common/autotest_common.sh@10 -- # set +x
00:04:04.007 ************************************
00:04:04.007 START TEST env_pci
00:04:04.007 ************************************
00:04:04.007 11:18:55 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:04.007
00:04:04.007
00:04:04.007 CUnit - A unit testing framework for C - Version 2.1-3
00:04:04.007 http://cunit.sourceforge.net/
00:04:04.007
00:04:04.007
00:04:04.007 Suite: pci
00:04:04.007 Test: pci_hook ...[2024-12-09 11:18:55.934126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3279902 has claimed it
00:04:04.007 EAL: Cannot find device (10000:00:01.0)
00:04:04.007 EAL: Failed to attach device on primary process
00:04:04.007 passed
00:04:04.007
00:04:04.007 Run Summary: Type Total Ran Passed Failed Inactive
00:04:04.007 suites 1 1 n/a 0 0
00:04:04.007 tests 1 1 1 0 0
00:04:04.007 asserts 25 25 25 0 n/a
00:04:04.007
00:04:04.007 Elapsed time = 0.031 seconds
00:04:04.007
00:04:04.007 real 0m0.052s
00:04:04.007 user 0m0.015s
00:04:04.007 sys 0m0.036s
00:04:04.007 11:18:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:04.007 11:18:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:04.007 ************************************
00:04:04.007 END TEST env_pci
00:04:04.007 ************************************
00:04:04.007 11:18:56 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:04.007 11:18:56 env -- env/env.sh@15 -- # uname
00:04:04.007 11:18:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:04.007 11:18:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:04.007 11:18:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:04.007 11:18:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:04.007 11:18:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:04.007 11:18:56 env -- common/autotest_common.sh@10 -- # set +x
00:04:04.007 ************************************
00:04:04.007 START TEST env_dpdk_post_init
00:04:04.007 ************************************
00:04:04.007 11:18:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:04.007 EAL: Detected CPU lcores: 128
00:04:04.007 EAL: Detected NUMA nodes: 2
00:04:04.007 EAL: Detected shared linkage of DPDK
00:04:04.007 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:04.007 EAL: Selected IOVA mode 'VA'
00:04:04.007 EAL: VFIO support initialized
00:04:04.007 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:04.269 EAL: Using IOMMU type 1 (Type 1)
00:04:04.269 EAL: Ignore mapping IO port bar(1)
00:04:04.269 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:04:04.530 EAL: Ignore mapping IO port bar(1)
00:04:04.530 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:04:04.792 EAL: Ignore mapping IO port bar(1)
00:04:04.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:04:05.054 EAL: Ignore mapping IO port bar(1)
00:04:05.054 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:04:05.054 EAL: Ignore mapping IO port bar(1)
00:04:05.316 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:04:05.316 EAL: Ignore mapping IO port bar(1)
00:04:05.578 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:04:05.578 EAL: Ignore mapping IO port bar(1)
00:04:05.839 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:04:05.839 EAL: Ignore mapping IO port bar(1)
00:04:05.839 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:04:06.100 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:04:06.363 EAL: Ignore mapping IO port bar(1)
00:04:06.363 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:04:06.624 EAL: Ignore mapping IO port bar(1)
00:04:06.624 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:04:06.624 EAL: Ignore mapping IO port bar(1)
00:04:06.885 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:04:06.885 EAL: Ignore mapping IO port bar(1)
00:04:07.146 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:04:07.146 EAL: Ignore mapping IO port bar(1)
00:04:07.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:04:07.407 EAL: Ignore mapping IO port bar(1)
00:04:07.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:04:07.668 EAL: Ignore mapping IO port bar(1)
00:04:07.668 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:04:07.931 EAL: Ignore mapping IO port bar(1)
00:04:07.931 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:04:07.931 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:04:07.931 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:04:08.191 Starting DPDK initialization...
00:04:08.191 Starting SPDK post initialization...
00:04:08.191 SPDK NVMe probe
00:04:08.191 Attaching to 0000:65:00.0
00:04:08.191 Attached to 0000:65:00.0
00:04:08.191 Cleaning up...
00:04:10.106
00:04:10.106 real 0m5.743s
00:04:10.106 user 0m0.117s
00:04:10.106 sys 0m0.172s
00:04:10.106 11:19:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.106 11:19:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:10.106 ************************************
00:04:10.106 END TEST env_dpdk_post_init
00:04:10.106 ************************************
00:04:10.106 11:19:01 env -- env/env.sh@26 -- # uname
00:04:10.106 11:19:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:10.106 11:19:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:10.106 11:19:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:10.106 11:19:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:10.106 11:19:01 env -- common/autotest_common.sh@10 -- # set +x
00:04:10.106 ************************************
00:04:10.106 START TEST env_mem_callbacks
00:04:10.106 ************************************
00:04:10.106 11:19:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:10.106 EAL: Detected CPU lcores: 128
00:04:10.106 EAL: Detected NUMA nodes: 2
00:04:10.106 EAL: Detected shared linkage of DPDK
00:04:10.106 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:10.106 EAL: Selected IOVA mode 'VA'
00:04:10.106 EAL: VFIO support initialized
00:04:10.106 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:10.106
00:04:10.106
00:04:10.106 CUnit - A unit testing framework for C - Version 2.1-3
00:04:10.106 http://cunit.sourceforge.net/
00:04:10.106
00:04:10.106
00:04:10.106 Suite: memory
00:04:10.106 Test: test ...
00:04:10.106 register 0x200000200000 2097152
00:04:10.106 malloc 3145728
00:04:10.106 register 0x200000400000 4194304
00:04:10.106 buf 0x200000500000 len 3145728 PASSED
00:04:10.106 malloc 64
00:04:10.106 buf 0x2000004fff40 len 64 PASSED
00:04:10.106 malloc 4194304
00:04:10.106 register 0x200000800000 6291456
00:04:10.106 buf 0x200000a00000 len 4194304 PASSED
00:04:10.106 free 0x200000500000 3145728
00:04:10.106 free 0x2000004fff40 64
00:04:10.106 unregister 0x200000400000 4194304 PASSED
00:04:10.106 free 0x200000a00000 4194304
00:04:10.106 unregister 0x200000800000 6291456 PASSED
00:04:10.106 malloc 8388608
00:04:10.106 register 0x200000400000 10485760
00:04:10.107 buf 0x200000600000 len 8388608 PASSED
00:04:10.107 free 0x200000600000 8388608
00:04:10.107 unregister 0x200000400000 10485760 PASSED
00:04:10.107 passed
00:04:10.107
00:04:10.107 Run Summary: Type Total Ran Passed Failed Inactive
00:04:10.107 suites 1 1 n/a 0 0
00:04:10.107 tests 1 1 1 0 0
00:04:10.107 asserts 15 15 15 0 n/a
00:04:10.107
00:04:10.107 Elapsed time = 0.005 seconds
00:04:10.107
00:04:10.107 real 0m0.060s
00:04:10.107 user 0m0.017s
00:04:10.107 sys 0m0.043s
00:04:10.107 11:19:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.107 11:19:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:10.107 ************************************
00:04:10.107 END TEST env_mem_callbacks
00:04:10.107 ************************************
00:04:10.107
00:04:10.107 real 0m7.437s
00:04:10.107 user 0m1.031s
00:04:10.107 sys 0m0.952s
00:04:10.107 11:19:01 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.107 11:19:01 env -- common/autotest_common.sh@10 -- # set +x
00:04:10.107 ************************************
00:04:10.107 END TEST env
00:04:10.107 ************************************
00:04:10.107 11:19:02 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:10.107 11:19:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:10.107 11:19:02 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:10.107 11:19:02 -- common/autotest_common.sh@10 -- # set +x
00:04:10.107 ************************************
00:04:10.107 START TEST rpc
00:04:10.107 ************************************
00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:10.107 * Looking for test storage...
00:04:10.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.107 11:19:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.107 11:19:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.107 11:19:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.107 11:19:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.107 11:19:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.107 11:19:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.107 11:19:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.107 11:19:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.107 11:19:02 rpc -- scripts/common.sh@345 -- # : 1 00:04:10.107 11:19:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.107 11:19:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.107 11:19:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.107 11:19:02 rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.107 11:19:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.107 11:19:02 rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.107 11:19:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.107 11:19:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.107 11:19:02 rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.107 11:19:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.107 11:19:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.107 11:19:02 rpc -- scripts/common.sh@368 -- # return 0 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 
00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.107 --rc genhtml_branch_coverage=1 00:04:10.107 --rc genhtml_function_coverage=1 00:04:10.107 --rc genhtml_legend=1 00:04:10.107 --rc geninfo_all_blocks=1 00:04:10.107 --rc geninfo_unexecuted_blocks=1 00:04:10.107 00:04:10.107 ' 00:04:10.107 11:19:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3281339 00:04:10.107 11:19:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.107 11:19:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3281339 00:04:10.107 11:19:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 3281339 ']' 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.107 11:19:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.368 [2024-12-09 11:19:02.309961] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:04:10.368 [2024-12-09 11:19:02.310040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281339 ] 00:04:10.368 [2024-12-09 11:19:02.388598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.368 [2024-12-09 11:19:02.429580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:10.368 [2024-12-09 11:19:02.429620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3281339' to capture a snapshot of events at runtime. 00:04:10.368 [2024-12-09 11:19:02.429629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:10.368 [2024-12-09 11:19:02.429636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:10.368 [2024-12-09 11:19:02.429642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3281339 for offline analysis/debug. 
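
The app_setup_trace notices above give two ways to look at the bdev tracepoints this spdk_tgt was started with (spdk_tgt -e bdev): snapshot them live, or keep the shm file for later. Both commands below are taken from those notices; only the build/bin location of spdk_trace is an assumption, by analogy with the spdk_tgt path used in this run:

# Live snapshot of the tracepoints while pid 3281339 is still up:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s spdk_tgt -p 3281339
# Or, as the last notice suggests, save the trace shm file for offline analysis:
cp /dev/shm/spdk_tgt_trace.pid3281339 /tmp/
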
00:04:10.368 [2024-12-09 11:19:02.430270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.312 11:19:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.312 11:19:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:11.312 11:19:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.312 11:19:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.312 11:19:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.312 11:19:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.312 11:19:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.312 11:19:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.312 11:19:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.312 ************************************ 00:04:11.312 START TEST rpc_integrity 00:04:11.312 ************************************ 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.312 { 00:04:11.312 "name": "Malloc0", 00:04:11.312 "aliases": [ 00:04:11.312 "5a6506c4-0ba3-413a-9047-97d9d5ed3c9d" 00:04:11.312 ], 00:04:11.312 "product_name": "Malloc disk", 00:04:11.312 "block_size": 512, 00:04:11.312 "num_blocks": 16384, 00:04:11.312 "uuid": "5a6506c4-0ba3-413a-9047-97d9d5ed3c9d", 00:04:11.312 "assigned_rate_limits": { 00:04:11.312 "rw_ios_per_sec": 0, 00:04:11.312 "rw_mbytes_per_sec": 0, 00:04:11.312 "r_mbytes_per_sec": 0, 00:04:11.312 "w_mbytes_per_sec": 0 00:04:11.312 }, 
00:04:11.312 "claimed": false, 00:04:11.312 "zoned": false, 00:04:11.312 "supported_io_types": { 00:04:11.312 "read": true, 00:04:11.312 "write": true, 00:04:11.312 "unmap": true, 00:04:11.312 "flush": true, 00:04:11.312 "reset": true, 00:04:11.312 "nvme_admin": false, 00:04:11.312 "nvme_io": false, 00:04:11.312 "nvme_io_md": false, 00:04:11.312 "write_zeroes": true, 00:04:11.312 "zcopy": true, 00:04:11.312 "get_zone_info": false, 00:04:11.312 "zone_management": false, 00:04:11.312 "zone_append": false, 00:04:11.312 "compare": false, 00:04:11.312 "compare_and_write": false, 00:04:11.312 "abort": true, 00:04:11.312 "seek_hole": false, 00:04:11.312 "seek_data": false, 00:04:11.312 "copy": true, 00:04:11.312 "nvme_iov_md": false 00:04:11.312 }, 00:04:11.312 "memory_domains": [ 00:04:11.312 { 00:04:11.312 "dma_device_id": "system", 00:04:11.312 "dma_device_type": 1 00:04:11.312 }, 00:04:11.312 { 00:04:11.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.312 "dma_device_type": 2 00:04:11.312 } 00:04:11.312 ], 00:04:11.312 "driver_specific": {} 00:04:11.312 } 00:04:11.312 ]' 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.312 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.312 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.313 [2024-12-09 11:19:03.282391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.313 [2024-12-09 11:19:03.282425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.313 [2024-12-09 11:19:03.282438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb1c840 00:04:11.313 [2024-12-09 11:19:03.282446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.313 [2024-12-09 11:19:03.283820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.313 [2024-12-09 11:19:03.283841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.313 Passthru0 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.313 { 00:04:11.313 "name": "Malloc0", 00:04:11.313 "aliases": [ 00:04:11.313 "5a6506c4-0ba3-413a-9047-97d9d5ed3c9d" 00:04:11.313 ], 00:04:11.313 "product_name": "Malloc disk", 00:04:11.313 "block_size": 512, 00:04:11.313 "num_blocks": 16384, 00:04:11.313 "uuid": "5a6506c4-0ba3-413a-9047-97d9d5ed3c9d", 00:04:11.313 "assigned_rate_limits": { 00:04:11.313 "rw_ios_per_sec": 0, 00:04:11.313 "rw_mbytes_per_sec": 0, 00:04:11.313 "r_mbytes_per_sec": 0, 00:04:11.313 "w_mbytes_per_sec": 0 00:04:11.313 }, 00:04:11.313 "claimed": true, 00:04:11.313 "claim_type": "exclusive_write", 00:04:11.313 "zoned": false, 00:04:11.313 "supported_io_types": { 00:04:11.313 "read": true, 00:04:11.313 "write": true, 00:04:11.313 "unmap": true, 00:04:11.313 "flush": 
true, 00:04:11.313 "reset": true, 00:04:11.313 "nvme_admin": false, 00:04:11.313 "nvme_io": false, 00:04:11.313 "nvme_io_md": false, 00:04:11.313 "write_zeroes": true, 00:04:11.313 "zcopy": true, 00:04:11.313 "get_zone_info": false, 00:04:11.313 "zone_management": false, 00:04:11.313 "zone_append": false, 00:04:11.313 "compare": false, 00:04:11.313 "compare_and_write": false, 00:04:11.313 "abort": true, 00:04:11.313 "seek_hole": false, 00:04:11.313 "seek_data": false, 00:04:11.313 "copy": true, 00:04:11.313 "nvme_iov_md": false 00:04:11.313 }, 00:04:11.313 "memory_domains": [ 00:04:11.313 { 00:04:11.313 "dma_device_id": "system", 00:04:11.313 "dma_device_type": 1 00:04:11.313 }, 00:04:11.313 { 00:04:11.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.313 "dma_device_type": 2 00:04:11.313 } 00:04:11.313 ], 00:04:11.313 "driver_specific": {} 00:04:11.313 }, 00:04:11.313 { 00:04:11.313 "name": "Passthru0", 00:04:11.313 "aliases": [ 00:04:11.313 "779b3c97-d73d-562f-8a20-2211948bf0dc" 00:04:11.313 ], 00:04:11.313 "product_name": "passthru", 00:04:11.313 "block_size": 512, 00:04:11.313 "num_blocks": 16384, 00:04:11.313 "uuid": "779b3c97-d73d-562f-8a20-2211948bf0dc", 00:04:11.313 "assigned_rate_limits": { 00:04:11.313 "rw_ios_per_sec": 0, 00:04:11.313 "rw_mbytes_per_sec": 0, 00:04:11.313 "r_mbytes_per_sec": 0, 00:04:11.313 "w_mbytes_per_sec": 0 00:04:11.313 }, 00:04:11.313 "claimed": false, 00:04:11.313 "zoned": false, 00:04:11.313 "supported_io_types": { 00:04:11.313 "read": true, 00:04:11.313 "write": true, 00:04:11.313 "unmap": true, 00:04:11.313 "flush": true, 00:04:11.313 "reset": true, 00:04:11.313 "nvme_admin": false, 00:04:11.313 "nvme_io": false, 00:04:11.313 "nvme_io_md": false, 00:04:11.313 "write_zeroes": true, 00:04:11.313 "zcopy": true, 00:04:11.313 "get_zone_info": false, 00:04:11.313 "zone_management": false, 00:04:11.313 "zone_append": false, 00:04:11.313 "compare": false, 00:04:11.313 "compare_and_write": false, 00:04:11.313 "abort": true, 00:04:11.313 "seek_hole": false, 00:04:11.313 "seek_data": false, 00:04:11.313 "copy": true, 00:04:11.313 "nvme_iov_md": false 00:04:11.313 }, 00:04:11.313 "memory_domains": [ 00:04:11.313 { 00:04:11.313 "dma_device_id": "system", 00:04:11.313 "dma_device_type": 1 00:04:11.313 }, 00:04:11.313 { 00:04:11.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.313 "dma_device_type": 2 00:04:11.313 } 00:04:11.313 ], 00:04:11.313 "driver_specific": { 00:04:11.313 "passthru": { 00:04:11.313 "name": "Passthru0", 00:04:11.313 "base_bdev_name": "Malloc0" 00:04:11.313 } 00:04:11.313 } 00:04:11.313 } 00:04:11.313 ]' 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.313 11:19:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.313 00:04:11.313 real 0m0.263s 00:04:11.313 user 0m0.174s 00:04:11.313 sys 0m0.031s 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.313 11:19:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.313 ************************************ 00:04:11.313 END TEST rpc_integrity 00:04:11.313 ************************************ 00:04:11.313 11:19:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.313 11:19:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.313 11:19:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.313 11:19:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 ************************************ 00:04:11.575 START TEST rpc_plugins 00:04:11.575 ************************************ 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.575 { 00:04:11.575 "name": "Malloc1", 00:04:11.575 "aliases": [ 00:04:11.575 "5c94464a-278e-43d2-ab50-13f67f11a1fe" 00:04:11.575 ], 00:04:11.575 "product_name": "Malloc disk", 00:04:11.575 "block_size": 4096, 00:04:11.575 "num_blocks": 256, 00:04:11.575 "uuid": "5c94464a-278e-43d2-ab50-13f67f11a1fe", 00:04:11.575 "assigned_rate_limits": { 00:04:11.575 "rw_ios_per_sec": 0, 00:04:11.575 "rw_mbytes_per_sec": 0, 00:04:11.575 "r_mbytes_per_sec": 0, 00:04:11.575 "w_mbytes_per_sec": 0 00:04:11.575 }, 00:04:11.575 "claimed": false, 00:04:11.575 "zoned": false, 00:04:11.575 "supported_io_types": { 00:04:11.575 "read": true, 00:04:11.575 "write": true, 00:04:11.575 "unmap": true, 00:04:11.575 "flush": true, 00:04:11.575 "reset": true, 00:04:11.575 "nvme_admin": false, 00:04:11.575 "nvme_io": false, 00:04:11.575 "nvme_io_md": false, 00:04:11.575 "write_zeroes": true, 00:04:11.575 "zcopy": true, 00:04:11.575 "get_zone_info": false, 00:04:11.575 "zone_management": false, 00:04:11.575 "zone_append": false, 00:04:11.575 "compare": false, 00:04:11.575 "compare_and_write": false, 00:04:11.575 "abort": true, 00:04:11.575 "seek_hole": false, 00:04:11.575 "seek_data": false, 00:04:11.575 "copy": true, 00:04:11.575 "nvme_iov_md": false 
00:04:11.575 }, 00:04:11.575 "memory_domains": [ 00:04:11.575 { 00:04:11.575 "dma_device_id": "system", 00:04:11.575 "dma_device_type": 1 00:04:11.575 }, 00:04:11.575 { 00:04:11.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.575 "dma_device_type": 2 00:04:11.575 } 00:04:11.575 ], 00:04:11.575 "driver_specific": {} 00:04:11.575 } 00:04:11.575 ]' 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:11.575 11:19:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.575 00:04:11.575 real 0m0.142s 00:04:11.575 user 0m0.091s 00:04:11.575 sys 0m0.013s 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.575 11:19:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 ************************************ 00:04:11.575 END TEST rpc_plugins 00:04:11.575 ************************************ 00:04:11.575 11:19:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.575 11:19:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.575 11:19:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.575 11:19:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 ************************************ 00:04:11.575 START TEST rpc_trace_cmd_test 00:04:11.575 ************************************ 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:11.575 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3281339", 00:04:11.575 "tpoint_group_mask": "0x8", 00:04:11.575 "iscsi_conn": { 00:04:11.575 "mask": "0x2", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "scsi": { 00:04:11.575 "mask": "0x4", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "bdev": { 00:04:11.575 "mask": "0x8", 00:04:11.575 "tpoint_mask": "0xffffffffffffffff" 00:04:11.575 }, 00:04:11.575 "nvmf_rdma": { 00:04:11.575 "mask": "0x10", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "nvmf_tcp": { 00:04:11.575 "mask": "0x20", 00:04:11.575 
"tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "ftl": { 00:04:11.575 "mask": "0x40", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "blobfs": { 00:04:11.575 "mask": "0x80", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "dsa": { 00:04:11.575 "mask": "0x200", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "thread": { 00:04:11.575 "mask": "0x400", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "nvme_pcie": { 00:04:11.575 "mask": "0x800", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "iaa": { 00:04:11.575 "mask": "0x1000", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "nvme_tcp": { 00:04:11.575 "mask": "0x2000", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "bdev_nvme": { 00:04:11.575 "mask": "0x4000", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "sock": { 00:04:11.575 "mask": "0x8000", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "blob": { 00:04:11.575 "mask": "0x10000", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "bdev_raid": { 00:04:11.575 "mask": "0x20000", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 }, 00:04:11.575 "scheduler": { 00:04:11.575 "mask": "0x40000", 00:04:11.575 "tpoint_mask": "0x0" 00:04:11.575 } 00:04:11.575 }' 00:04:11.575 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:11.836 00:04:11.836 real 0m0.245s 00:04:11.836 user 0m0.209s 00:04:11.836 sys 0m0.027s 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.836 11:19:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.836 ************************************ 00:04:11.836 END TEST rpc_trace_cmd_test 00:04:11.836 ************************************ 00:04:11.836 11:19:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:11.836 11:19:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:11.836 11:19:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:11.836 11:19:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.836 11:19:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.836 11:19:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.097 ************************************ 00:04:12.097 START TEST rpc_daemon_integrity 00:04:12.097 ************************************ 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.097 11:19:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.097 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.097 { 00:04:12.097 "name": "Malloc2", 00:04:12.097 "aliases": [ 00:04:12.097 "06924717-ff21-4d2b-8d2a-cbad28e09863" 00:04:12.097 ], 00:04:12.097 "product_name": "Malloc disk", 00:04:12.097 "block_size": 512, 00:04:12.097 "num_blocks": 16384, 00:04:12.097 "uuid": "06924717-ff21-4d2b-8d2a-cbad28e09863", 00:04:12.097 "assigned_rate_limits": { 00:04:12.097 "rw_ios_per_sec": 0, 00:04:12.097 "rw_mbytes_per_sec": 0, 00:04:12.097 "r_mbytes_per_sec": 0, 00:04:12.097 "w_mbytes_per_sec": 0 00:04:12.097 }, 00:04:12.097 "claimed": false, 00:04:12.097 "zoned": false, 00:04:12.097 "supported_io_types": { 00:04:12.097 "read": true, 00:04:12.097 "write": true, 00:04:12.097 "unmap": true, 00:04:12.098 "flush": true, 00:04:12.098 "reset": true, 00:04:12.098 "nvme_admin": false, 00:04:12.098 "nvme_io": false, 00:04:12.098 "nvme_io_md": false, 00:04:12.098 "write_zeroes": true, 00:04:12.098 "zcopy": true, 00:04:12.098 "get_zone_info": false, 00:04:12.098 "zone_management": false, 00:04:12.098 "zone_append": false, 00:04:12.098 "compare": false, 00:04:12.098 "compare_and_write": false, 00:04:12.098 "abort": true, 00:04:12.098 "seek_hole": false, 00:04:12.098 "seek_data": false, 00:04:12.098 "copy": true, 00:04:12.098 "nvme_iov_md": false 00:04:12.098 }, 00:04:12.098 "memory_domains": [ 00:04:12.098 { 00:04:12.098 "dma_device_id": "system", 00:04:12.098 "dma_device_type": 1 00:04:12.098 }, 00:04:12.098 { 00:04:12.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.098 "dma_device_type": 2 00:04:12.098 } 00:04:12.098 ], 00:04:12.098 "driver_specific": {} 00:04:12.098 } 00:04:12.098 ]' 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.098 [2024-12-09 11:19:04.164752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:12.098 
[2024-12-09 11:19:04.164779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.098 [2024-12-09 11:19:04.164792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa6aff0 00:04:12.098 [2024-12-09 11:19:04.164799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.098 [2024-12-09 11:19:04.166057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.098 [2024-12-09 11:19:04.166077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.098 Passthru0 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.098 { 00:04:12.098 "name": "Malloc2", 00:04:12.098 "aliases": [ 00:04:12.098 "06924717-ff21-4d2b-8d2a-cbad28e09863" 00:04:12.098 ], 00:04:12.098 "product_name": "Malloc disk", 00:04:12.098 "block_size": 512, 00:04:12.098 "num_blocks": 16384, 00:04:12.098 "uuid": "06924717-ff21-4d2b-8d2a-cbad28e09863", 00:04:12.098 "assigned_rate_limits": { 00:04:12.098 "rw_ios_per_sec": 0, 00:04:12.098 "rw_mbytes_per_sec": 0, 00:04:12.098 "r_mbytes_per_sec": 0, 00:04:12.098 "w_mbytes_per_sec": 0 00:04:12.098 }, 00:04:12.098 "claimed": true, 00:04:12.098 "claim_type": "exclusive_write", 00:04:12.098 "zoned": false, 00:04:12.098 "supported_io_types": { 00:04:12.098 "read": true, 00:04:12.098 "write": true, 00:04:12.098 "unmap": true, 00:04:12.098 "flush": true, 00:04:12.098 "reset": true, 00:04:12.098 "nvme_admin": false, 00:04:12.098 "nvme_io": false, 00:04:12.098 "nvme_io_md": false, 00:04:12.098 "write_zeroes": true, 00:04:12.098 "zcopy": true, 00:04:12.098 "get_zone_info": false, 00:04:12.098 "zone_management": false, 00:04:12.098 "zone_append": false, 00:04:12.098 "compare": false, 00:04:12.098 "compare_and_write": false, 00:04:12.098 "abort": true, 00:04:12.098 "seek_hole": false, 00:04:12.098 "seek_data": false, 00:04:12.098 "copy": true, 00:04:12.098 "nvme_iov_md": false 00:04:12.098 }, 00:04:12.098 "memory_domains": [ 00:04:12.098 { 00:04:12.098 "dma_device_id": "system", 00:04:12.098 "dma_device_type": 1 00:04:12.098 }, 00:04:12.098 { 00:04:12.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.098 "dma_device_type": 2 00:04:12.098 } 00:04:12.098 ], 00:04:12.098 "driver_specific": {} 00:04:12.098 }, 00:04:12.098 { 00:04:12.098 "name": "Passthru0", 00:04:12.098 "aliases": [ 00:04:12.098 "54dd8d78-01e7-5e1d-b709-8eddd6502b45" 00:04:12.098 ], 00:04:12.098 "product_name": "passthru", 00:04:12.098 "block_size": 512, 00:04:12.098 "num_blocks": 16384, 00:04:12.098 "uuid": "54dd8d78-01e7-5e1d-b709-8eddd6502b45", 00:04:12.098 "assigned_rate_limits": { 00:04:12.098 "rw_ios_per_sec": 0, 00:04:12.098 "rw_mbytes_per_sec": 0, 00:04:12.098 "r_mbytes_per_sec": 0, 00:04:12.098 "w_mbytes_per_sec": 0 00:04:12.098 }, 00:04:12.098 "claimed": false, 00:04:12.098 "zoned": false, 00:04:12.098 "supported_io_types": { 00:04:12.098 "read": true, 00:04:12.098 "write": true, 00:04:12.098 "unmap": true, 00:04:12.098 "flush": true, 00:04:12.098 "reset": true, 
00:04:12.098 "nvme_admin": false, 00:04:12.098 "nvme_io": false, 00:04:12.098 "nvme_io_md": false, 00:04:12.098 "write_zeroes": true, 00:04:12.098 "zcopy": true, 00:04:12.098 "get_zone_info": false, 00:04:12.098 "zone_management": false, 00:04:12.098 "zone_append": false, 00:04:12.098 "compare": false, 00:04:12.098 "compare_and_write": false, 00:04:12.098 "abort": true, 00:04:12.098 "seek_hole": false, 00:04:12.098 "seek_data": false, 00:04:12.098 "copy": true, 00:04:12.098 "nvme_iov_md": false 00:04:12.098 }, 00:04:12.098 "memory_domains": [ 00:04:12.098 { 00:04:12.098 "dma_device_id": "system", 00:04:12.098 "dma_device_type": 1 00:04:12.098 }, 00:04:12.098 { 00:04:12.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.098 "dma_device_type": 2 00:04:12.098 } 00:04:12.098 ], 00:04:12.098 "driver_specific": { 00:04:12.098 "passthru": { 00:04:12.098 "name": "Passthru0", 00:04:12.098 "base_bdev_name": "Malloc2" 00:04:12.098 } 00:04:12.098 } 00:04:12.098 } 00:04:12.098 ]' 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.098 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.359 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.359 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.359 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:12.359 11:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.359 00:04:12.359 real 0m0.279s 00:04:12.359 user 0m0.187s 00:04:12.359 sys 0m0.029s 00:04:12.359 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.359 11:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.359 ************************************ 00:04:12.359 END TEST rpc_daemon_integrity 00:04:12.359 ************************************ 00:04:12.359 11:19:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.359 11:19:04 rpc -- rpc/rpc.sh@84 -- # killprocess 3281339 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 3281339 ']' 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@958 -- # kill -0 3281339 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3281339 
00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3281339' 00:04:12.359 killing process with pid 3281339 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@973 -- # kill 3281339 00:04:12.359 11:19:04 rpc -- common/autotest_common.sh@978 -- # wait 3281339 00:04:12.621 00:04:12.621 real 0m2.562s 00:04:12.621 user 0m3.326s 00:04:12.621 sys 0m0.725s 00:04:12.621 11:19:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.621 11:19:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.621 ************************************ 00:04:12.621 END TEST rpc 00:04:12.621 ************************************ 00:04:12.621 11:19:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:12.621 11:19:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.621 11:19:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.621 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.621 ************************************ 00:04:12.621 START TEST skip_rpc 00:04:12.621 ************************************ 00:04:12.621 11:19:04 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:12.883 * Looking for test storage... 00:04:12.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.883 11:19:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.883 --rc genhtml_branch_coverage=1 00:04:12.883 --rc genhtml_function_coverage=1 00:04:12.883 --rc genhtml_legend=1 00:04:12.883 --rc geninfo_all_blocks=1 00:04:12.883 --rc geninfo_unexecuted_blocks=1 00:04:12.883 00:04:12.883 ' 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.883 --rc genhtml_branch_coverage=1 00:04:12.883 --rc genhtml_function_coverage=1 00:04:12.883 --rc genhtml_legend=1 00:04:12.883 --rc geninfo_all_blocks=1 00:04:12.883 --rc geninfo_unexecuted_blocks=1 00:04:12.883 00:04:12.883 ' 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.883 --rc genhtml_branch_coverage=1 00:04:12.883 --rc genhtml_function_coverage=1 00:04:12.883 --rc genhtml_legend=1 00:04:12.883 --rc geninfo_all_blocks=1 00:04:12.883 --rc geninfo_unexecuted_blocks=1 00:04:12.883 00:04:12.883 ' 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.883 --rc genhtml_branch_coverage=1 00:04:12.883 --rc genhtml_function_coverage=1 00:04:12.883 --rc genhtml_legend=1 00:04:12.883 --rc geninfo_all_blocks=1 00:04:12.883 --rc geninfo_unexecuted_blocks=1 00:04:12.883 00:04:12.883 ' 00:04:12.883 11:19:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.883 11:19:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.883 11:19:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.883 11:19:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.883 ************************************ 00:04:12.883 START TEST skip_rpc 00:04:12.883 ************************************ 00:04:12.883 11:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:12.883 
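
The run that follows exercises the no-RPC-server path: spdk_tgt is started with --no-rpc-server -m 0x1, the script sleeps 5 seconds, and the spdk_get_version RPC is expected to fail. Reduced to a by-hand sketch (paths relative to the spdk checkout used throughout this log):

# Sketch of what test_skip_rpc below does:
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5   # mirrors the sleep at rpc/skip_rpc.sh@19
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: the RPC server should be disabled"
else
    echo "RPC refused, as the test expects"
fi
kill %1
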
11:19:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3282187 00:04:12.883 11:19:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.883 11:19:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:12.883 11:19:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:12.883 [2024-12-09 11:19:04.993861] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:04:12.883 [2024-12-09 11:19:04.993910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282187 ] 00:04:13.144 [2024-12-09 11:19:05.065650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.144 [2024-12-09 11:19:05.102409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3282187 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3282187 ']' 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3282187 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.434 11:19:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282187 00:04:18.434 11:19:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.434 11:19:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.434 11:19:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282187' 00:04:18.434 killing process with pid 3282187 00:04:18.434 11:19:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3282187 00:04:18.434 11:19:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3282187 00:04:18.434 00:04:18.434 real 0m5.282s 00:04:18.434 user 0m5.093s 00:04:18.434 sys 0m0.237s 00:04:18.434 11:19:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.434 11:19:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.434 ************************************ 00:04:18.434 END TEST skip_rpc 00:04:18.434 ************************************ 00:04:18.434 11:19:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.434 11:19:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.434 11:19:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.434 11:19:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.434 ************************************ 00:04:18.434 START TEST skip_rpc_with_json 00:04:18.434 ************************************ 00:04:18.434 11:19:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:18.434 11:19:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.434 11:19:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3283223 00:04:18.434 11:19:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3283223 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3283223 ']' 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.435 11:19:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.435 [2024-12-09 11:19:10.359695] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
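NOTE: the skip_rpc pass above asserts that a target launched with --no-rpc-server serves no JSON-RPC at all: the harness starts spdk_tgt, waits, and requires that rpc_cmd spdk_get_version fail. A minimal standalone reproduction (same binary and flags as in this run; PID handling simplified, socket path is rpc.py's default) would be:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # no /var/tmp/spdk.sock is ever created
  pid=$!; sleep 5
  ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'RPC refused, as the test requires'
  kill $pid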
00:04:18.435 [2024-12-09 11:19:10.359744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283223 ] 00:04:18.435 [2024-12-09 11:19:10.430482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.435 [2024-12-09 11:19:10.466943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.007 [2024-12-09 11:19:11.118604] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:19.007 request: 00:04:19.007 { 00:04:19.007 "trtype": "tcp", 00:04:19.007 "method": "nvmf_get_transports", 00:04:19.007 "req_id": 1 00:04:19.007 } 00:04:19.007 Got JSON-RPC error response 00:04:19.007 response: 00:04:19.007 { 00:04:19.007 "code": -19, 00:04:19.007 "message": "No such device" 00:04:19.007 } 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.007 [2024-12-09 11:19:11.130731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.007 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.268 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.268 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:19.268 { 00:04:19.268 "subsystems": [ 00:04:19.268 { 00:04:19.268 "subsystem": "fsdev", 00:04:19.268 "config": [ 00:04:19.268 { 00:04:19.268 "method": "fsdev_set_opts", 00:04:19.268 "params": { 00:04:19.268 "fsdev_io_pool_size": 65535, 00:04:19.268 "fsdev_io_cache_size": 256 00:04:19.268 } 00:04:19.268 } 00:04:19.268 ] 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "subsystem": "vfio_user_target", 00:04:19.268 "config": null 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "subsystem": "keyring", 00:04:19.268 "config": [] 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "subsystem": "iobuf", 00:04:19.268 "config": [ 00:04:19.268 { 00:04:19.268 "method": "iobuf_set_options", 00:04:19.268 "params": { 00:04:19.268 "small_pool_count": 8192, 00:04:19.268 "large_pool_count": 1024, 00:04:19.268 "small_bufsize": 8192, 00:04:19.268 "large_bufsize": 135168, 00:04:19.268 "enable_numa": false 00:04:19.268 } 00:04:19.268 } 
00:04:19.268 ] 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "subsystem": "sock", 00:04:19.268 "config": [ 00:04:19.268 { 00:04:19.268 "method": "sock_set_default_impl", 00:04:19.268 "params": { 00:04:19.268 "impl_name": "posix" 00:04:19.268 } 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "method": "sock_impl_set_options", 00:04:19.268 "params": { 00:04:19.268 "impl_name": "ssl", 00:04:19.268 "recv_buf_size": 4096, 00:04:19.268 "send_buf_size": 4096, 00:04:19.268 "enable_recv_pipe": true, 00:04:19.268 "enable_quickack": false, 00:04:19.268 "enable_placement_id": 0, 00:04:19.268 "enable_zerocopy_send_server": true, 00:04:19.268 "enable_zerocopy_send_client": false, 00:04:19.268 "zerocopy_threshold": 0, 00:04:19.268 "tls_version": 0, 00:04:19.268 "enable_ktls": false 00:04:19.268 } 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "method": "sock_impl_set_options", 00:04:19.268 "params": { 00:04:19.268 "impl_name": "posix", 00:04:19.268 "recv_buf_size": 2097152, 00:04:19.268 "send_buf_size": 2097152, 00:04:19.268 "enable_recv_pipe": true, 00:04:19.268 "enable_quickack": false, 00:04:19.268 "enable_placement_id": 0, 00:04:19.268 "enable_zerocopy_send_server": true, 00:04:19.268 "enable_zerocopy_send_client": false, 00:04:19.268 "zerocopy_threshold": 0, 00:04:19.268 "tls_version": 0, 00:04:19.268 "enable_ktls": false 00:04:19.268 } 00:04:19.268 } 00:04:19.268 ] 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "subsystem": "vmd", 00:04:19.268 "config": [] 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "subsystem": "accel", 00:04:19.268 "config": [ 00:04:19.268 { 00:04:19.268 "method": "accel_set_options", 00:04:19.268 "params": { 00:04:19.268 "small_cache_size": 128, 00:04:19.268 "large_cache_size": 16, 00:04:19.268 "task_count": 2048, 00:04:19.268 "sequence_count": 2048, 00:04:19.268 "buf_count": 2048 00:04:19.268 } 00:04:19.268 } 00:04:19.268 ] 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "subsystem": "bdev", 00:04:19.268 "config": [ 00:04:19.268 { 00:04:19.268 "method": "bdev_set_options", 00:04:19.268 "params": { 00:04:19.268 "bdev_io_pool_size": 65535, 00:04:19.268 "bdev_io_cache_size": 256, 00:04:19.268 "bdev_auto_examine": true, 00:04:19.268 "iobuf_small_cache_size": 128, 00:04:19.268 "iobuf_large_cache_size": 16 00:04:19.268 } 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "method": "bdev_raid_set_options", 00:04:19.268 "params": { 00:04:19.268 "process_window_size_kb": 1024, 00:04:19.268 "process_max_bandwidth_mb_sec": 0 00:04:19.268 } 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "method": "bdev_iscsi_set_options", 00:04:19.268 "params": { 00:04:19.268 "timeout_sec": 30 00:04:19.268 } 00:04:19.268 }, 00:04:19.268 { 00:04:19.268 "method": "bdev_nvme_set_options", 00:04:19.268 "params": { 00:04:19.268 "action_on_timeout": "none", 00:04:19.268 "timeout_us": 0, 00:04:19.268 "timeout_admin_us": 0, 00:04:19.268 "keep_alive_timeout_ms": 10000, 00:04:19.268 "arbitration_burst": 0, 00:04:19.268 "low_priority_weight": 0, 00:04:19.268 "medium_priority_weight": 0, 00:04:19.268 "high_priority_weight": 0, 00:04:19.268 "nvme_adminq_poll_period_us": 10000, 00:04:19.268 "nvme_ioq_poll_period_us": 0, 00:04:19.268 "io_queue_requests": 0, 00:04:19.268 "delay_cmd_submit": true, 00:04:19.268 "transport_retry_count": 4, 00:04:19.268 "bdev_retry_count": 3, 00:04:19.268 "transport_ack_timeout": 0, 00:04:19.268 "ctrlr_loss_timeout_sec": 0, 00:04:19.268 "reconnect_delay_sec": 0, 00:04:19.268 "fast_io_fail_timeout_sec": 0, 00:04:19.268 "disable_auto_failback": false, 00:04:19.268 "generate_uuids": false, 00:04:19.268 "transport_tos": 
0, 00:04:19.269 "nvme_error_stat": false, 00:04:19.269 "rdma_srq_size": 0, 00:04:19.269 "io_path_stat": false, 00:04:19.269 "allow_accel_sequence": false, 00:04:19.269 "rdma_max_cq_size": 0, 00:04:19.269 "rdma_cm_event_timeout_ms": 0, 00:04:19.269 "dhchap_digests": [ 00:04:19.269 "sha256", 00:04:19.269 "sha384", 00:04:19.269 "sha512" 00:04:19.269 ], 00:04:19.269 "dhchap_dhgroups": [ 00:04:19.269 "null", 00:04:19.269 "ffdhe2048", 00:04:19.269 "ffdhe3072", 00:04:19.269 "ffdhe4096", 00:04:19.269 "ffdhe6144", 00:04:19.269 "ffdhe8192" 00:04:19.269 ] 00:04:19.269 } 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "method": "bdev_nvme_set_hotplug", 00:04:19.269 "params": { 00:04:19.269 "period_us": 100000, 00:04:19.269 "enable": false 00:04:19.269 } 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "method": "bdev_wait_for_examine" 00:04:19.269 } 00:04:19.269 ] 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "scsi", 00:04:19.269 "config": null 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "scheduler", 00:04:19.269 "config": [ 00:04:19.269 { 00:04:19.269 "method": "framework_set_scheduler", 00:04:19.269 "params": { 00:04:19.269 "name": "static" 00:04:19.269 } 00:04:19.269 } 00:04:19.269 ] 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "vhost_scsi", 00:04:19.269 "config": [] 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "vhost_blk", 00:04:19.269 "config": [] 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "ublk", 00:04:19.269 "config": [] 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "nbd", 00:04:19.269 "config": [] 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "nvmf", 00:04:19.269 "config": [ 00:04:19.269 { 00:04:19.269 "method": "nvmf_set_config", 00:04:19.269 "params": { 00:04:19.269 "discovery_filter": "match_any", 00:04:19.269 "admin_cmd_passthru": { 00:04:19.269 "identify_ctrlr": false 00:04:19.269 }, 00:04:19.269 "dhchap_digests": [ 00:04:19.269 "sha256", 00:04:19.269 "sha384", 00:04:19.269 "sha512" 00:04:19.269 ], 00:04:19.269 "dhchap_dhgroups": [ 00:04:19.269 "null", 00:04:19.269 "ffdhe2048", 00:04:19.269 "ffdhe3072", 00:04:19.269 "ffdhe4096", 00:04:19.269 "ffdhe6144", 00:04:19.269 "ffdhe8192" 00:04:19.269 ] 00:04:19.269 } 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "method": "nvmf_set_max_subsystems", 00:04:19.269 "params": { 00:04:19.269 "max_subsystems": 1024 00:04:19.269 } 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "method": "nvmf_set_crdt", 00:04:19.269 "params": { 00:04:19.269 "crdt1": 0, 00:04:19.269 "crdt2": 0, 00:04:19.269 "crdt3": 0 00:04:19.269 } 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "method": "nvmf_create_transport", 00:04:19.269 "params": { 00:04:19.269 "trtype": "TCP", 00:04:19.269 "max_queue_depth": 128, 00:04:19.269 "max_io_qpairs_per_ctrlr": 127, 00:04:19.269 "in_capsule_data_size": 4096, 00:04:19.269 "max_io_size": 131072, 00:04:19.269 "io_unit_size": 131072, 00:04:19.269 "max_aq_depth": 128, 00:04:19.269 "num_shared_buffers": 511, 00:04:19.269 "buf_cache_size": 4294967295, 00:04:19.269 "dif_insert_or_strip": false, 00:04:19.269 "zcopy": false, 00:04:19.269 "c2h_success": true, 00:04:19.269 "sock_priority": 0, 00:04:19.269 "abort_timeout_sec": 1, 00:04:19.269 "ack_timeout": 0, 00:04:19.269 "data_wr_pool_size": 0 00:04:19.269 } 00:04:19.269 } 00:04:19.269 ] 00:04:19.269 }, 00:04:19.269 { 00:04:19.269 "subsystem": "iscsi", 00:04:19.269 "config": [ 00:04:19.269 { 00:04:19.269 "method": "iscsi_set_options", 00:04:19.269 "params": { 00:04:19.269 "node_base": "iqn.2016-06.io.spdk", 00:04:19.269 "max_sessions": 
128, 00:04:19.269 "max_connections_per_session": 2, 00:04:19.269 "max_queue_depth": 64, 00:04:19.269 "default_time2wait": 2, 00:04:19.269 "default_time2retain": 20, 00:04:19.269 "first_burst_length": 8192, 00:04:19.269 "immediate_data": true, 00:04:19.269 "allow_duplicated_isid": false, 00:04:19.269 "error_recovery_level": 0, 00:04:19.269 "nop_timeout": 60, 00:04:19.269 "nop_in_interval": 30, 00:04:19.269 "disable_chap": false, 00:04:19.269 "require_chap": false, 00:04:19.269 "mutual_chap": false, 00:04:19.269 "chap_group": 0, 00:04:19.269 "max_large_datain_per_connection": 64, 00:04:19.269 "max_r2t_per_connection": 4, 00:04:19.269 "pdu_pool_size": 36864, 00:04:19.269 "immediate_data_pool_size": 16384, 00:04:19.269 "data_out_pool_size": 2048 00:04:19.269 } 00:04:19.269 } 00:04:19.269 ] 00:04:19.269 } 00:04:19.269 ] 00:04:19.269 } 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3283223 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3283223 ']' 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3283223 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283223 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3283223' 00:04:19.269 killing process with pid 3283223 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3283223 00:04:19.269 11:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3283223 00:04:19.530 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3283548 00:04:19.530 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:19.530 11:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3283548 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3283548 ']' 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3283548 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283548 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3283548' 00:04:24.820 killing process with pid 3283548 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3283548 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3283548 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:24.820 00:04:24.820 real 0m6.565s 00:04:24.820 user 0m6.466s 00:04:24.820 sys 0m0.536s 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.820 ************************************ 00:04:24.820 END TEST skip_rpc_with_json 00:04:24.820 ************************************ 00:04:24.820 11:19:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:24.820 11:19:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.820 11:19:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.820 11:19:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.820 ************************************ 00:04:24.820 START TEST skip_rpc_with_delay 00:04:24.820 ************************************ 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:24.820 11:19:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.082 
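NOTE: the invocation above is wrapped in NOT because it is supposed to fail: --wait-for-rpc asks spdk_tgt to pause startup until an initialization RPC (framework_start_init) arrives, which is contradictory when --no-rpc-server removes the RPC listener, so the app must refuse to start; the error it prints follows. A hedged sketch of the same assertion:

  # combining the two flags should be rejected at startup (sketch)
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'BUG: contradictory flags were accepted'   # the test would fail here
  fi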
[2024-12-09 11:19:16.995918] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:25.082 11:19:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:25.082 11:19:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.082 11:19:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.082 11:19:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:25.082 00:04:25.082 real 0m0.077s 00:04:25.082 user 0m0.052s 00:04:25.082 sys 0m0.025s 00:04:25.082 11:19:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.082 11:19:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:25.082 ************************************ 00:04:25.082 END TEST skip_rpc_with_delay 00:04:25.082 ************************************ 00:04:25.082 11:19:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.082 11:19:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:25.082 11:19:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:25.082 11:19:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.082 11:19:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.082 11:19:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.082 ************************************ 00:04:25.082 START TEST exit_on_failed_rpc_init 00:04:25.082 ************************************ 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3284635 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3284635 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3284635 ']' 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.082 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.082 [2024-12-09 11:19:17.146018] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:04:25.082 [2024-12-09 11:19:17.146073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284635 ] 00:04:25.082 [2024-12-09 11:19:17.220036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.343 [2024-12-09 11:19:17.257335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:25.915 11:19:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.915 [2024-12-09 11:19:18.007821] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:04:25.915 [2024-12-09 11:19:18.007873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284822 ] 00:04:26.176 [2024-12-09 11:19:18.095624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.176 [2024-12-09 11:19:18.131547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.176 [2024-12-09 11:19:18.131601] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
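NOTE: this error is exactly what exit_on_failed_rpc_init checks: the second spdk_tgt (core mask 0x2) tries to bind the same default RPC socket /var/tmp/spdk.sock the first instance already owns, so rpc_initialize must fail and the app must exit non-zero; the remaining shutdown lines follow. Giving the second instance its own socket would avoid the collision, e.g. (socket path illustrative):

  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock   # -r selects a non-default RPC socket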
00:04:26.176 [2024-12-09 11:19:18.131611] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:26.176 [2024-12-09 11:19:18.131617] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3284635 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3284635 ']' 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3284635 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3284635 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3284635' 00:04:26.176 killing process with pid 3284635 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3284635 00:04:26.176 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3284635 00:04:26.437 00:04:26.437 real 0m1.353s 00:04:26.437 user 0m1.614s 00:04:26.437 sys 0m0.360s 00:04:26.437 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.437 11:19:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.437 ************************************ 00:04:26.437 END TEST exit_on_failed_rpc_init 00:04:26.437 ************************************ 00:04:26.437 11:19:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:26.437 00:04:26.437 real 0m13.784s 00:04:26.437 user 0m13.457s 00:04:26.437 sys 0m1.459s 00:04:26.437 11:19:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.437 11:19:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.437 ************************************ 00:04:26.437 END TEST skip_rpc 00:04:26.437 ************************************ 00:04:26.437 11:19:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:26.437 11:19:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.437 11:19:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.437 11:19:18 -- 
common/autotest_common.sh@10 -- # set +x 00:04:26.437 ************************************ 00:04:26.437 START TEST rpc_client 00:04:26.437 ************************************ 00:04:26.437 11:19:18 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:26.698 * Looking for test storage... 00:04:26.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.698 11:19:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.698 --rc genhtml_branch_coverage=1 00:04:26.698 --rc genhtml_function_coverage=1 00:04:26.698 --rc genhtml_legend=1 00:04:26.698 --rc geninfo_all_blocks=1 00:04:26.698 --rc geninfo_unexecuted_blocks=1 00:04:26.698 00:04:26.698 ' 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.698 --rc genhtml_branch_coverage=1 00:04:26.698 --rc genhtml_function_coverage=1 00:04:26.698 --rc genhtml_legend=1 00:04:26.698 --rc geninfo_all_blocks=1 00:04:26.698 --rc geninfo_unexecuted_blocks=1 00:04:26.698 00:04:26.698 ' 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.698 --rc genhtml_branch_coverage=1 00:04:26.698 --rc genhtml_function_coverage=1 00:04:26.698 --rc genhtml_legend=1 00:04:26.698 --rc geninfo_all_blocks=1 00:04:26.698 --rc geninfo_unexecuted_blocks=1 00:04:26.698 00:04:26.698 ' 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.698 --rc genhtml_branch_coverage=1 00:04:26.698 --rc genhtml_function_coverage=1 00:04:26.698 --rc genhtml_legend=1 00:04:26.698 --rc geninfo_all_blocks=1 00:04:26.698 --rc geninfo_unexecuted_blocks=1 00:04:26.698 00:04:26.698 ' 00:04:26.698 11:19:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:26.698 OK 00:04:26.698 11:19:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:26.698 00:04:26.698 real 0m0.217s 00:04:26.698 user 0m0.129s 00:04:26.698 sys 0m0.095s 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.698 11:19:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:26.698 ************************************ 00:04:26.698 END TEST rpc_client 00:04:26.698 ************************************ 00:04:26.698 11:19:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
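NOTE: the 'lt 1.15 2' walk earlier in this block is scripts/common.sh comparing the installed lcov version against 2 field by field (split both versions on '.', scan until the first differing component) to decide which coverage flags to export; the LCOV_OPTS blocks are its result. A standalone sketch of the same comparison idea:

  ver_lt() {                              # is dotted version $1 < $2 ?
    local IFS=.; local -a a=($1) b=($2) i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
  }
  ver_lt 1.15 2 && echo '1.15 < 2'        # matches the decision taken above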
00:04:26.698 11:19:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.698 11:19:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.698 11:19:18 -- common/autotest_common.sh@10 -- # set +x 00:04:26.698 ************************************ 00:04:26.698 START TEST json_config 00:04:26.698 ************************************ 00:04:26.698 11:19:18 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:26.961 11:19:18 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.961 11:19:18 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.961 11:19:18 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.961 11:19:19 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.961 11:19:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.961 11:19:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.961 11:19:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.961 11:19:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.961 11:19:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.961 11:19:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.961 11:19:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.961 11:19:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.961 11:19:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.961 11:19:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.961 11:19:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.961 11:19:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:26.961 11:19:19 json_config -- scripts/common.sh@345 -- # : 1 00:04:26.961 11:19:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.961 11:19:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.961 11:19:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:26.961 11:19:19 json_config -- scripts/common.sh@353 -- # local d=1 00:04:26.961 11:19:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.961 11:19:19 json_config -- scripts/common.sh@355 -- # echo 1 00:04:26.961 11:19:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.961 11:19:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:26.962 11:19:19 json_config -- scripts/common.sh@353 -- # local d=2 00:04:26.962 11:19:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.962 11:19:19 json_config -- scripts/common.sh@355 -- # echo 2 00:04:26.962 11:19:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.962 11:19:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.962 11:19:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.962 11:19:19 json_config -- scripts/common.sh@368 -- # return 0 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.962 --rc genhtml_branch_coverage=1 00:04:26.962 --rc genhtml_function_coverage=1 00:04:26.962 --rc genhtml_legend=1 00:04:26.962 --rc geninfo_all_blocks=1 00:04:26.962 --rc geninfo_unexecuted_blocks=1 00:04:26.962 00:04:26.962 ' 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.962 --rc genhtml_branch_coverage=1 00:04:26.962 --rc genhtml_function_coverage=1 00:04:26.962 --rc genhtml_legend=1 00:04:26.962 --rc geninfo_all_blocks=1 00:04:26.962 --rc geninfo_unexecuted_blocks=1 00:04:26.962 00:04:26.962 ' 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.962 --rc genhtml_branch_coverage=1 00:04:26.962 --rc genhtml_function_coverage=1 00:04:26.962 --rc genhtml_legend=1 00:04:26.962 --rc geninfo_all_blocks=1 00:04:26.962 --rc geninfo_unexecuted_blocks=1 00:04:26.962 00:04:26.962 ' 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.962 --rc genhtml_branch_coverage=1 00:04:26.962 --rc genhtml_function_coverage=1 00:04:26.962 --rc genhtml_legend=1 00:04:26.962 --rc geninfo_all_blocks=1 00:04:26.962 --rc geninfo_unexecuted_blocks=1 00:04:26.962 00:04:26.962 ' 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:26.962 11:19:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.962 11:19:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.962 11:19:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.962 11:19:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.962 11:19:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.962 11:19:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.962 11:19:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.962 11:19:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.962 11:19:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:26.962 11:19:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@51 -- # : 0 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
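NOTE: the nvmf/common.sh prologue above builds the host identity later connect calls use: nvme gen-hostnqn emits a UUID-based NQN, and NVME_HOSTID carries the bare UUID from its tail; both feed the --hostnqn/--hostid arguments collected in NVME_HOST. Roughly (derivation simplified, values as logged above):

  hostnqn=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  hostid=${hostnqn##*:}          # strip everything up to the last ':' to recover the raw UUID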
00:04:26.962 11:19:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.962 11:19:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:26.962 INFO: JSON configuration test init 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.962 11:19:19 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:26.962 11:19:19 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:26.962 11:19:19 json_config -- json_config/common.sh@10 -- # shift 00:04:26.962 11:19:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.962 11:19:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.962 11:19:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.962 11:19:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.962 11:19:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.962 11:19:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3285111 00:04:26.962 11:19:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.962 Waiting for target to run... 00:04:26.962 11:19:19 json_config -- json_config/common.sh@25 -- # waitforlisten 3285111 /var/tmp/spdk_tgt.sock 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@835 -- # '[' -z 3285111 ']' 00:04:26.962 11:19:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.962 11:19:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.224 [2024-12-09 11:19:19.146203] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
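NOTE: unlike the skip_rpc targets, this one keeps its RPC server on an explicit socket (-r /var/tmp/spdk_tgt.sock) and adds --wait-for-rpc, so after startup only configuration-time RPCs are serviced until initialization is released; that window is what lets json_config load a configuration before subsystems come up. The gate is normally lifted with a call like (socket as in this run):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init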
00:04:27.224 [2024-12-09 11:19:19.146278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285111 ] 00:04:27.486 [2024-12-09 11:19:19.476298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.486 [2024-12-09 11:19:19.510245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.059 11:19:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.059 11:19:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:28.059 11:19:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:28.059 00:04:28.059 11:19:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:28.059 11:19:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:28.059 11:19:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.059 11:19:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.059 11:19:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:28.059 11:19:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:28.059 11:19:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.059 11:19:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.059 11:19:19 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:28.059 11:19:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:28.059 11:19:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:28.632 11:19:20 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:28.632 11:19:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:28.632 11:19:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.632 11:19:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:28.633 11:19:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:28.633 11:19:20 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@54 -- # sort 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:28.633 11:19:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.633 11:19:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:28.633 11:19:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.633 11:19:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:28.633 11:19:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:28.633 11:19:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:28.894 MallocForNvmf0 00:04:28.894 11:19:20 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:28.894 11:19:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.155 MallocForNvmf1 00:04:29.155 11:19:21 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.155 11:19:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.155 [2024-12-09 11:19:21.290114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.416 11:19:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:29.416 11:19:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:29.416 11:19:21 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:29.416 11:19:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:29.677 11:19:21 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:29.677 11:19:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:29.938 11:19:21 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.938 11:19:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.938 [2024-12-09 11:19:22.016430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:29.938 11:19:22 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:29.938 11:19:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.938 11:19:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.938 11:19:22 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:29.938 11:19:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.938 11:19:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.199 11:19:22 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:30.199 11:19:22 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.199 11:19:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.199 MallocBdevForConfigChangeCheck 00:04:30.199 11:19:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:30.199 11:19:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.199 11:19:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.199 11:19:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:30.199 11:19:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.773 11:19:22 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:30.773 INFO: shutting down applications... 
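Stripped of the xtrace noise, the NVMe-oF target configuration built above is seven RPCs against the UNIX-domain socket. A minimal sketch, assuming the spdk repo root as working directory and a target already listening on /var/tmp/spdk_tgt.sock (the long Jenkins workspace paths in the trace are environment-specific):

RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB malloc bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB malloc bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0            # -u io_unit_size, -c in_capsule_data_size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420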
00:04:30.773 11:19:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:30.773 11:19:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:30.773 11:19:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:30.773 11:19:22 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:31.035 Calling clear_iscsi_subsystem 00:04:31.035 Calling clear_nvmf_subsystem 00:04:31.035 Calling clear_nbd_subsystem 00:04:31.035 Calling clear_ublk_subsystem 00:04:31.035 Calling clear_vhost_blk_subsystem 00:04:31.035 Calling clear_vhost_scsi_subsystem 00:04:31.035 Calling clear_bdev_subsystem 00:04:31.035 11:19:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:31.035 11:19:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:31.035 11:19:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:31.035 11:19:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:31.035 11:19:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.035 11:19:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:31.295 11:19:23 json_config -- json_config/json_config.sh@352 -- # break 00:04:31.295 11:19:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:31.295 11:19:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:31.295 11:19:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:31.295 11:19:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.295 11:19:23 json_config -- json_config/common.sh@35 -- # [[ -n 3285111 ]] 00:04:31.295 11:19:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3285111 00:04:31.295 11:19:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.295 11:19:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.295 11:19:23 json_config -- json_config/common.sh@41 -- # kill -0 3285111 00:04:31.295 11:19:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.867 11:19:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.867 11:19:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.867 11:19:23 json_config -- json_config/common.sh@41 -- # kill -0 3285111 00:04:31.867 11:19:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:31.867 11:19:23 json_config -- json_config/common.sh@43 -- # break 00:04:31.867 11:19:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:31.867 11:19:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:31.867 SPDK target shutdown done 00:04:31.867 11:19:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:31.867 INFO: relaunching applications... 
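The shutdown just traced is json_config/common.sh's SIGINT-then-poll idiom: send SIGINT to the target, then probe the PID with kill -0 for up to 30 half-second intervals. Condensed (the stderr redirect is an addition for quiet output, not in the original helper):

kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # kill -0 fails once the PID is gone
    sleep 0.5
done
echo 'SPDK target shutdown done'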
00:04:31.867 11:19:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.867 11:19:23 json_config -- json_config/common.sh@9 -- # local app=target 00:04:31.867 11:19:23 json_config -- json_config/common.sh@10 -- # shift 00:04:31.867 11:19:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.867 11:19:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.867 11:19:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.867 11:19:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.867 11:19:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.867 11:19:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3286243 00:04:31.867 11:19:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.867 Waiting for target to run... 00:04:31.867 11:19:23 json_config -- json_config/common.sh@25 -- # waitforlisten 3286243 /var/tmp/spdk_tgt.sock 00:04:31.867 11:19:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.867 11:19:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 3286243 ']' 00:04:31.867 11:19:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.867 11:19:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.867 11:19:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:31.867 11:19:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.867 11:19:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.867 [2024-12-09 11:19:23.991753] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:04:31.867 [2024-12-09 11:19:23.991808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286243 ] 00:04:32.128 [2024-12-09 11:19:24.280869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.389 [2024-12-09 11:19:24.310483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.964 [2024-12-09 11:19:24.833210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.964 [2024-12-09 11:19:24.865580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:32.964 11:19:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.964 11:19:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:32.964 11:19:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:32.964 00:04:32.964 11:19:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:32.964 11:19:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:32.964 INFO: Checking if target configuration is the same... 
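Two things happen next. First, the relaunch above restarted the target directly from the saved config — build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json, with -m 0x1 pinning a single reactor to core 0 and -s capping DPDK memory at 1024 MB (hence the "-m 1024" in the EAL parameter line). Second, the "same configuration" check below is test/json_config/json_diff.sh, which normalizes both JSON documents and diffs them. In outline (file names here are illustrative; the real script pipes through /dev/fd and mktemp names like the /tmp/62.* files in the trace):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/running.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ondisk.json
diff -u /tmp/running.json /tmp/ondisk.json    # exit 0: identical; exit 1: config drift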
00:04:32.964 11:19:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.964 11:19:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:32.964 11:19:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.964 + '[' 2 -ne 2 ']' 00:04:32.964 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:32.964 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:32.964 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.964 +++ basename /dev/fd/62 00:04:32.964 ++ mktemp /tmp/62.XXX 00:04:32.964 + tmp_file_1=/tmp/62.BTR 00:04:32.964 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.964 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.964 + tmp_file_2=/tmp/spdk_tgt_config.json.Sb4 00:04:32.964 + ret=0 00:04:32.964 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.223 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.223 + diff -u /tmp/62.BTR /tmp/spdk_tgt_config.json.Sb4 00:04:33.223 + echo 'INFO: JSON config files are the same' 00:04:33.223 INFO: JSON config files are the same 00:04:33.223 + rm /tmp/62.BTR /tmp/spdk_tgt_config.json.Sb4 00:04:33.223 + exit 0 00:04:33.223 11:19:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:33.223 11:19:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:33.223 INFO: changing configuration and checking if this can be detected... 00:04:33.223 11:19:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.223 11:19:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.483 11:19:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.483 11:19:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:33.483 11:19:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.483 + '[' 2 -ne 2 ']' 00:04:33.483 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:33.483 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:33.483 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:33.483 +++ basename /dev/fd/62 00:04:33.483 ++ mktemp /tmp/62.XXX 00:04:33.483 + tmp_file_1=/tmp/62.RUg 00:04:33.483 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.483 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:33.483 + tmp_file_2=/tmp/spdk_tgt_config.json.8aw 00:04:33.483 + ret=0 00:04:33.483 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.742 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.742 + diff -u /tmp/62.RUg /tmp/spdk_tgt_config.json.8aw 00:04:33.742 + ret=1 00:04:33.742 + echo '=== Start of file: /tmp/62.RUg ===' 00:04:33.742 + cat /tmp/62.RUg 00:04:33.742 + echo '=== End of file: /tmp/62.RUg ===' 00:04:33.742 + echo '' 00:04:33.742 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8aw ===' 00:04:33.742 + cat /tmp/spdk_tgt_config.json.8aw 00:04:33.742 + echo '=== End of file: /tmp/spdk_tgt_config.json.8aw ===' 00:04:33.742 + echo '' 00:04:33.742 + rm /tmp/62.RUg /tmp/spdk_tgt_config.json.8aw 00:04:33.742 + exit 1 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:33.742 INFO: configuration change detected. 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 3286243 ]] 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.742 11:19:25 json_config -- json_config/json_config.sh@330 -- # killprocess 3286243 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@954 -- # '[' -z 3286243 ']' 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@958 -- # kill -0 3286243 00:04:33.742 11:19:25 json_config -- common/autotest_common.sh@959 -- # uname 00:04:34.002 11:19:25 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.002 11:19:25 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3286243 00:04:34.002 11:19:25 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.002 11:19:25 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.002 11:19:25 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3286243' 00:04:34.002 killing process with pid 3286243 00:04:34.002 11:19:25 json_config -- common/autotest_common.sh@973 -- # kill 3286243 00:04:34.002 11:19:25 json_config -- common/autotest_common.sh@978 -- # wait 3286243 00:04:34.263 11:19:26 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.263 11:19:26 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:34.263 11:19:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.263 11:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.263 11:19:26 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:34.263 11:19:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:34.263 INFO: Success 00:04:34.263 00:04:34.263 real 0m7.434s 00:04:34.263 user 0m8.988s 00:04:34.263 sys 0m1.960s 00:04:34.263 11:19:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.263 11:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.263 ************************************ 00:04:34.263 END TEST json_config 00:04:34.263 ************************************ 00:04:34.263 11:19:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.263 11:19:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.263 11:19:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.263 11:19:26 -- common/autotest_common.sh@10 -- # set +x 00:04:34.263 ************************************ 00:04:34.263 START TEST json_config_extra_key 00:04:34.263 ************************************ 00:04:34.263 11:19:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.534 11:19:26 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.534 11:19:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.534 11:19:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.534 11:19:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.534 11:19:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.534 11:19:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.534 11:19:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.535 11:19:26 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:34.535 11:19:26 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.535 11:19:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.535 --rc genhtml_branch_coverage=1 00:04:34.535 --rc genhtml_function_coverage=1 00:04:34.535 --rc genhtml_legend=1 00:04:34.535 --rc geninfo_all_blocks=1 00:04:34.535 --rc geninfo_unexecuted_blocks=1 00:04:34.535 00:04:34.535 ' 00:04:34.535 11:19:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.535 --rc genhtml_branch_coverage=1 00:04:34.535 --rc genhtml_function_coverage=1 00:04:34.535 --rc genhtml_legend=1 00:04:34.535 --rc geninfo_all_blocks=1 00:04:34.535 --rc geninfo_unexecuted_blocks=1 00:04:34.535 00:04:34.535 ' 00:04:34.535 11:19:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.535 --rc genhtml_branch_coverage=1 00:04:34.535 --rc genhtml_function_coverage=1 00:04:34.535 --rc genhtml_legend=1 00:04:34.535 --rc geninfo_all_blocks=1 00:04:34.535 --rc geninfo_unexecuted_blocks=1 00:04:34.535 00:04:34.535 ' 00:04:34.535 11:19:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.535 --rc genhtml_branch_coverage=1 00:04:34.535 --rc genhtml_function_coverage=1 00:04:34.535 --rc genhtml_legend=1 00:04:34.535 --rc geninfo_all_blocks=1 00:04:34.535 --rc geninfo_unexecuted_blocks=1 00:04:34.535 00:04:34.535 ' 00:04:34.535 11:19:26 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.535 11:19:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.535 11:19:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.535 11:19:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.535 11:19:26 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.535 11:19:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:34.535 11:19:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:34.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:34.535 11:19:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:34.535 INFO: launching applications... 
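"launching applications..." here means starting spdk_tgt against the static extra_key.json rather than a generated config, then blocking on the RPC socket. The invocation, as traced next:

build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json test/json_config/extra_key.json &
app_pid=$!
# waitforlisten then blocks until the socket answers RPCs (up to max_retries=100 here)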
00:04:34.535 11:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.535 11:19:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:34.535 11:19:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3286991 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.536 Waiting for target to run... 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3286991 /var/tmp/spdk_tgt.sock 00:04:34.536 11:19:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3286991 ']' 00:04:34.536 11:19:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.536 11:19:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.536 11:19:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.536 11:19:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.536 11:19:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.536 11:19:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.536 [2024-12-09 11:19:26.643436] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:04:34.536 [2024-12-09 11:19:26.643512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286991 ] 00:04:35.108 [2024-12-09 11:19:26.968871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.108 [2024-12-09 11:19:27.001228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.370 11:19:27 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.370 11:19:27 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:35.370 00:04:35.370 11:19:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:35.370 INFO: shutting down applications... 
00:04:35.370 11:19:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3286991 ]] 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3286991 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3286991 00:04:35.370 11:19:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.942 11:19:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.942 11:19:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.942 11:19:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3286991 00:04:35.942 11:19:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:35.942 11:19:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:35.942 11:19:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:35.942 11:19:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:35.942 SPDK target shutdown done 00:04:35.942 11:19:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:35.942 Success 00:04:35.942 00:04:35.942 real 0m1.584s 00:04:35.942 user 0m1.180s 00:04:35.942 sys 0m0.461s 00:04:35.942 11:19:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.942 11:19:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.942 ************************************ 00:04:35.942 END TEST json_config_extra_key 00:04:35.942 ************************************ 00:04:35.942 11:19:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.942 11:19:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.942 11:19:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.942 11:19:27 -- common/autotest_common.sh@10 -- # set +x 00:04:35.942 ************************************ 00:04:35.942 START TEST alias_rpc 00:04:35.942 ************************************ 00:04:35.942 11:19:28 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.204 * Looking for test storage... 
00:04:36.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.204 11:19:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.204 --rc genhtml_branch_coverage=1 00:04:36.204 --rc genhtml_function_coverage=1 00:04:36.204 --rc genhtml_legend=1 00:04:36.204 --rc geninfo_all_blocks=1 00:04:36.204 --rc geninfo_unexecuted_blocks=1 00:04:36.204 00:04:36.204 ' 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.204 --rc genhtml_branch_coverage=1 00:04:36.204 --rc genhtml_function_coverage=1 00:04:36.204 --rc genhtml_legend=1 00:04:36.204 --rc geninfo_all_blocks=1 00:04:36.204 --rc geninfo_unexecuted_blocks=1 00:04:36.204 00:04:36.204 ' 00:04:36.204 11:19:28 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.204 --rc genhtml_branch_coverage=1 00:04:36.204 --rc genhtml_function_coverage=1 00:04:36.204 --rc genhtml_legend=1 00:04:36.204 --rc geninfo_all_blocks=1 00:04:36.204 --rc geninfo_unexecuted_blocks=1 00:04:36.204 00:04:36.204 ' 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.204 --rc genhtml_branch_coverage=1 00:04:36.204 --rc genhtml_function_coverage=1 00:04:36.204 --rc genhtml_legend=1 00:04:36.204 --rc geninfo_all_blocks=1 00:04:36.204 --rc geninfo_unexecuted_blocks=1 00:04:36.204 00:04:36.204 ' 00:04:36.204 11:19:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:36.204 11:19:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3287362 00:04:36.204 11:19:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3287362 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3287362 ']' 00:04:36.204 11:19:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.204 11:19:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.205 [2024-12-09 11:19:28.278756] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
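The lcov gate repeated at the top of each of these tests ("lt 1.15 2") is scripts/common.sh's version comparison: split both versions on .-:, then compare the fields numerically until one differs. A condensed sketch, not the verbatim helper (the real code routes through cmp_versions and a decimal() validator, and this sketch assumes purely numeric fields):

lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first version is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first version is older
    done
    return 1   # equal versions: not strictly less-than
}
lt 1.15 2 && echo 'lcov 1.15 predates 2: enable branch/function coverage flags'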
00:04:36.205 [2024-12-09 11:19:28.278824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287362 ] 00:04:36.205 [2024-12-09 11:19:28.356904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.466 [2024-12-09 11:19:28.398775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.037 11:19:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.037 11:19:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:37.037 11:19:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:37.298 11:19:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3287362 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3287362 ']' 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3287362 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287362 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287362' 00:04:37.298 killing process with pid 3287362 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@973 -- # kill 3287362 00:04:37.298 11:19:29 alias_rpc -- common/autotest_common.sh@978 -- # wait 3287362 00:04:37.559 00:04:37.559 real 0m1.535s 00:04:37.559 user 0m1.708s 00:04:37.559 sys 0m0.409s 00:04:37.559 11:19:29 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.559 11:19:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.559 ************************************ 00:04:37.559 END TEST alias_rpc 00:04:37.559 ************************************ 00:04:37.559 11:19:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:37.559 11:19:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.559 11:19:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.559 11:19:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.559 11:19:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.559 ************************************ 00:04:37.559 START TEST spdkcli_tcp 00:04:37.559 ************************************ 00:04:37.559 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.821 * Looking for test storage... 
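Two harness details from the alias_rpc run above are worth calling out. load_config -i appears to be rpc.py's include-aliases switch, i.e. the deprecated-alias handling this test exists to exercise. And killprocess, traced here as for the earlier json_config targets, is a plain kill/wait pair gated on the process name (an SPDK app's comm shows up as reactor_0; the sudo branch, not taken in this run, handles targets launched via sudo):

process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' for spdk_tgt
echo "killing process with pid $pid"
kill "$pid"    # SIGTERM; SPDK reactors shut down cleanly on it
wait "$pid"    # reap the child and propagate its exit status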
00:04:37.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:37.821 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.821 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.821 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.821 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.821 11:19:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:37.821 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.821 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.821 --rc genhtml_branch_coverage=1 00:04:37.821 --rc genhtml_function_coverage=1 00:04:37.821 --rc genhtml_legend=1 00:04:37.821 --rc geninfo_all_blocks=1 00:04:37.821 --rc geninfo_unexecuted_blocks=1 00:04:37.821 00:04:37.821 ' 00:04:37.821 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.821 --rc genhtml_branch_coverage=1 00:04:37.821 --rc genhtml_function_coverage=1 00:04:37.822 --rc genhtml_legend=1 00:04:37.822 --rc geninfo_all_blocks=1 00:04:37.822 --rc 
geninfo_unexecuted_blocks=1 00:04:37.822 00:04:37.822 ' 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.822 --rc genhtml_branch_coverage=1 00:04:37.822 --rc genhtml_function_coverage=1 00:04:37.822 --rc genhtml_legend=1 00:04:37.822 --rc geninfo_all_blocks=1 00:04:37.822 --rc geninfo_unexecuted_blocks=1 00:04:37.822 00:04:37.822 ' 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.822 --rc genhtml_branch_coverage=1 00:04:37.822 --rc genhtml_function_coverage=1 00:04:37.822 --rc genhtml_legend=1 00:04:37.822 --rc geninfo_all_blocks=1 00:04:37.822 --rc geninfo_unexecuted_blocks=1 00:04:37.822 00:04:37.822 ' 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3287720 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3287720 00:04:37.822 11:19:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3287720 ']' 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.822 11:19:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.822 [2024-12-09 11:19:29.894844] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
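spdkcli_tcp's twist on the usual flow follows: bridge the UNIX-domain RPC socket to TCP with socat, then drive rpc.py over 127.0.0.1:9998. In outline, with both commands taken verbatim from the trace (-r and -t being rpc.py's connection-retry and timeout knobs):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
# success prints the full registered-method list, as seen below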
00:04:37.822 [2024-12-09 11:19:29.894924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287720 ] 00:04:37.822 [2024-12-09 11:19:29.977335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.082 [2024-12-09 11:19:30.021941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.082 [2024-12-09 11:19:30.021944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.662 11:19:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.662 11:19:30 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:38.662 11:19:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3287838 00:04:38.662 11:19:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:38.662 11:19:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:38.924 [ 00:04:38.924 "bdev_malloc_delete", 00:04:38.924 "bdev_malloc_create", 00:04:38.924 "bdev_null_resize", 00:04:38.924 "bdev_null_delete", 00:04:38.924 "bdev_null_create", 00:04:38.924 "bdev_nvme_cuse_unregister", 00:04:38.924 "bdev_nvme_cuse_register", 00:04:38.924 "bdev_opal_new_user", 00:04:38.924 "bdev_opal_set_lock_state", 00:04:38.924 "bdev_opal_delete", 00:04:38.924 "bdev_opal_get_info", 00:04:38.924 "bdev_opal_create", 00:04:38.924 "bdev_nvme_opal_revert", 00:04:38.924 "bdev_nvme_opal_init", 00:04:38.924 "bdev_nvme_send_cmd", 00:04:38.924 "bdev_nvme_set_keys", 00:04:38.924 "bdev_nvme_get_path_iostat", 00:04:38.924 "bdev_nvme_get_mdns_discovery_info", 00:04:38.924 "bdev_nvme_stop_mdns_discovery", 00:04:38.924 "bdev_nvme_start_mdns_discovery", 00:04:38.924 "bdev_nvme_set_multipath_policy", 00:04:38.924 "bdev_nvme_set_preferred_path", 00:04:38.924 "bdev_nvme_get_io_paths", 00:04:38.924 "bdev_nvme_remove_error_injection", 00:04:38.924 "bdev_nvme_add_error_injection", 00:04:38.924 "bdev_nvme_get_discovery_info", 00:04:38.924 "bdev_nvme_stop_discovery", 00:04:38.924 "bdev_nvme_start_discovery", 00:04:38.924 "bdev_nvme_get_controller_health_info", 00:04:38.924 "bdev_nvme_disable_controller", 00:04:38.924 "bdev_nvme_enable_controller", 00:04:38.924 "bdev_nvme_reset_controller", 00:04:38.924 "bdev_nvme_get_transport_statistics", 00:04:38.924 "bdev_nvme_apply_firmware", 00:04:38.924 "bdev_nvme_detach_controller", 00:04:38.924 "bdev_nvme_get_controllers", 00:04:38.924 "bdev_nvme_attach_controller", 00:04:38.924 "bdev_nvme_set_hotplug", 00:04:38.924 "bdev_nvme_set_options", 00:04:38.924 "bdev_passthru_delete", 00:04:38.924 "bdev_passthru_create", 00:04:38.924 "bdev_lvol_set_parent_bdev", 00:04:38.925 "bdev_lvol_set_parent", 00:04:38.925 "bdev_lvol_check_shallow_copy", 00:04:38.925 "bdev_lvol_start_shallow_copy", 00:04:38.925 "bdev_lvol_grow_lvstore", 00:04:38.925 "bdev_lvol_get_lvols", 00:04:38.925 "bdev_lvol_get_lvstores", 00:04:38.925 "bdev_lvol_delete", 00:04:38.925 "bdev_lvol_set_read_only", 00:04:38.925 "bdev_lvol_resize", 00:04:38.925 "bdev_lvol_decouple_parent", 00:04:38.925 "bdev_lvol_inflate", 00:04:38.925 "bdev_lvol_rename", 00:04:38.925 "bdev_lvol_clone_bdev", 00:04:38.925 "bdev_lvol_clone", 00:04:38.925 "bdev_lvol_snapshot", 00:04:38.925 "bdev_lvol_create", 00:04:38.925 "bdev_lvol_delete_lvstore", 00:04:38.925 "bdev_lvol_rename_lvstore", 
00:04:38.925 "bdev_lvol_create_lvstore", 00:04:38.925 "bdev_raid_set_options", 00:04:38.925 "bdev_raid_remove_base_bdev", 00:04:38.925 "bdev_raid_add_base_bdev", 00:04:38.925 "bdev_raid_delete", 00:04:38.925 "bdev_raid_create", 00:04:38.925 "bdev_raid_get_bdevs", 00:04:38.925 "bdev_error_inject_error", 00:04:38.925 "bdev_error_delete", 00:04:38.925 "bdev_error_create", 00:04:38.925 "bdev_split_delete", 00:04:38.925 "bdev_split_create", 00:04:38.925 "bdev_delay_delete", 00:04:38.925 "bdev_delay_create", 00:04:38.925 "bdev_delay_update_latency", 00:04:38.925 "bdev_zone_block_delete", 00:04:38.925 "bdev_zone_block_create", 00:04:38.925 "blobfs_create", 00:04:38.925 "blobfs_detect", 00:04:38.925 "blobfs_set_cache_size", 00:04:38.925 "bdev_aio_delete", 00:04:38.925 "bdev_aio_rescan", 00:04:38.925 "bdev_aio_create", 00:04:38.925 "bdev_ftl_set_property", 00:04:38.925 "bdev_ftl_get_properties", 00:04:38.925 "bdev_ftl_get_stats", 00:04:38.925 "bdev_ftl_unmap", 00:04:38.925 "bdev_ftl_unload", 00:04:38.925 "bdev_ftl_delete", 00:04:38.925 "bdev_ftl_load", 00:04:38.925 "bdev_ftl_create", 00:04:38.925 "bdev_virtio_attach_controller", 00:04:38.925 "bdev_virtio_scsi_get_devices", 00:04:38.925 "bdev_virtio_detach_controller", 00:04:38.925 "bdev_virtio_blk_set_hotplug", 00:04:38.925 "bdev_iscsi_delete", 00:04:38.925 "bdev_iscsi_create", 00:04:38.925 "bdev_iscsi_set_options", 00:04:38.925 "accel_error_inject_error", 00:04:38.925 "ioat_scan_accel_module", 00:04:38.925 "dsa_scan_accel_module", 00:04:38.925 "iaa_scan_accel_module", 00:04:38.925 "vfu_virtio_create_fs_endpoint", 00:04:38.925 "vfu_virtio_create_scsi_endpoint", 00:04:38.925 "vfu_virtio_scsi_remove_target", 00:04:38.925 "vfu_virtio_scsi_add_target", 00:04:38.925 "vfu_virtio_create_blk_endpoint", 00:04:38.925 "vfu_virtio_delete_endpoint", 00:04:38.925 "keyring_file_remove_key", 00:04:38.925 "keyring_file_add_key", 00:04:38.925 "keyring_linux_set_options", 00:04:38.925 "fsdev_aio_delete", 00:04:38.925 "fsdev_aio_create", 00:04:38.925 "iscsi_get_histogram", 00:04:38.925 "iscsi_enable_histogram", 00:04:38.925 "iscsi_set_options", 00:04:38.925 "iscsi_get_auth_groups", 00:04:38.925 "iscsi_auth_group_remove_secret", 00:04:38.925 "iscsi_auth_group_add_secret", 00:04:38.925 "iscsi_delete_auth_group", 00:04:38.925 "iscsi_create_auth_group", 00:04:38.925 "iscsi_set_discovery_auth", 00:04:38.925 "iscsi_get_options", 00:04:38.925 "iscsi_target_node_request_logout", 00:04:38.925 "iscsi_target_node_set_redirect", 00:04:38.925 "iscsi_target_node_set_auth", 00:04:38.925 "iscsi_target_node_add_lun", 00:04:38.925 "iscsi_get_stats", 00:04:38.925 "iscsi_get_connections", 00:04:38.925 "iscsi_portal_group_set_auth", 00:04:38.925 "iscsi_start_portal_group", 00:04:38.925 "iscsi_delete_portal_group", 00:04:38.925 "iscsi_create_portal_group", 00:04:38.925 "iscsi_get_portal_groups", 00:04:38.925 "iscsi_delete_target_node", 00:04:38.925 "iscsi_target_node_remove_pg_ig_maps", 00:04:38.925 "iscsi_target_node_add_pg_ig_maps", 00:04:38.925 "iscsi_create_target_node", 00:04:38.925 "iscsi_get_target_nodes", 00:04:38.925 "iscsi_delete_initiator_group", 00:04:38.925 "iscsi_initiator_group_remove_initiators", 00:04:38.925 "iscsi_initiator_group_add_initiators", 00:04:38.925 "iscsi_create_initiator_group", 00:04:38.925 "iscsi_get_initiator_groups", 00:04:38.925 "nvmf_set_crdt", 00:04:38.925 "nvmf_set_config", 00:04:38.925 "nvmf_set_max_subsystems", 00:04:38.925 "nvmf_stop_mdns_prr", 00:04:38.925 "nvmf_publish_mdns_prr", 00:04:38.925 "nvmf_subsystem_get_listeners", 00:04:38.925 
"nvmf_subsystem_get_qpairs", 00:04:38.925 "nvmf_subsystem_get_controllers", 00:04:38.925 "nvmf_get_stats", 00:04:38.925 "nvmf_get_transports", 00:04:38.925 "nvmf_create_transport", 00:04:38.925 "nvmf_get_targets", 00:04:38.925 "nvmf_delete_target", 00:04:38.925 "nvmf_create_target", 00:04:38.925 "nvmf_subsystem_allow_any_host", 00:04:38.925 "nvmf_subsystem_set_keys", 00:04:38.925 "nvmf_subsystem_remove_host", 00:04:38.925 "nvmf_subsystem_add_host", 00:04:38.925 "nvmf_ns_remove_host", 00:04:38.925 "nvmf_ns_add_host", 00:04:38.925 "nvmf_subsystem_remove_ns", 00:04:38.925 "nvmf_subsystem_set_ns_ana_group", 00:04:38.925 "nvmf_subsystem_add_ns", 00:04:38.925 "nvmf_subsystem_listener_set_ana_state", 00:04:38.925 "nvmf_discovery_get_referrals", 00:04:38.925 "nvmf_discovery_remove_referral", 00:04:38.925 "nvmf_discovery_add_referral", 00:04:38.925 "nvmf_subsystem_remove_listener", 00:04:38.925 "nvmf_subsystem_add_listener", 00:04:38.925 "nvmf_delete_subsystem", 00:04:38.925 "nvmf_create_subsystem", 00:04:38.925 "nvmf_get_subsystems", 00:04:38.925 "env_dpdk_get_mem_stats", 00:04:38.925 "nbd_get_disks", 00:04:38.925 "nbd_stop_disk", 00:04:38.925 "nbd_start_disk", 00:04:38.925 "ublk_recover_disk", 00:04:38.925 "ublk_get_disks", 00:04:38.925 "ublk_stop_disk", 00:04:38.925 "ublk_start_disk", 00:04:38.925 "ublk_destroy_target", 00:04:38.925 "ublk_create_target", 00:04:38.925 "virtio_blk_create_transport", 00:04:38.925 "virtio_blk_get_transports", 00:04:38.925 "vhost_controller_set_coalescing", 00:04:38.925 "vhost_get_controllers", 00:04:38.925 "vhost_delete_controller", 00:04:38.925 "vhost_create_blk_controller", 00:04:38.925 "vhost_scsi_controller_remove_target", 00:04:38.925 "vhost_scsi_controller_add_target", 00:04:38.925 "vhost_start_scsi_controller", 00:04:38.925 "vhost_create_scsi_controller", 00:04:38.925 "thread_set_cpumask", 00:04:38.925 "scheduler_set_options", 00:04:38.925 "framework_get_governor", 00:04:38.925 "framework_get_scheduler", 00:04:38.925 "framework_set_scheduler", 00:04:38.925 "framework_get_reactors", 00:04:38.925 "thread_get_io_channels", 00:04:38.925 "thread_get_pollers", 00:04:38.925 "thread_get_stats", 00:04:38.925 "framework_monitor_context_switch", 00:04:38.925 "spdk_kill_instance", 00:04:38.925 "log_enable_timestamps", 00:04:38.925 "log_get_flags", 00:04:38.925 "log_clear_flag", 00:04:38.925 "log_set_flag", 00:04:38.925 "log_get_level", 00:04:38.925 "log_set_level", 00:04:38.925 "log_get_print_level", 00:04:38.925 "log_set_print_level", 00:04:38.925 "framework_enable_cpumask_locks", 00:04:38.925 "framework_disable_cpumask_locks", 00:04:38.925 "framework_wait_init", 00:04:38.925 "framework_start_init", 00:04:38.925 "scsi_get_devices", 00:04:38.925 "bdev_get_histogram", 00:04:38.925 "bdev_enable_histogram", 00:04:38.925 "bdev_set_qos_limit", 00:04:38.925 "bdev_set_qd_sampling_period", 00:04:38.925 "bdev_get_bdevs", 00:04:38.925 "bdev_reset_iostat", 00:04:38.925 "bdev_get_iostat", 00:04:38.925 "bdev_examine", 00:04:38.925 "bdev_wait_for_examine", 00:04:38.925 "bdev_set_options", 00:04:38.925 "accel_get_stats", 00:04:38.925 "accel_set_options", 00:04:38.925 "accel_set_driver", 00:04:38.925 "accel_crypto_key_destroy", 00:04:38.925 "accel_crypto_keys_get", 00:04:38.925 "accel_crypto_key_create", 00:04:38.925 "accel_assign_opc", 00:04:38.925 "accel_get_module_info", 00:04:38.925 "accel_get_opc_assignments", 00:04:38.925 "vmd_rescan", 00:04:38.925 "vmd_remove_device", 00:04:38.925 "vmd_enable", 00:04:38.925 "sock_get_default_impl", 00:04:38.925 "sock_set_default_impl", 
00:04:38.926 "sock_impl_set_options", 00:04:38.926 "sock_impl_get_options", 00:04:38.926 "iobuf_get_stats", 00:04:38.926 "iobuf_set_options", 00:04:38.926 "keyring_get_keys", 00:04:38.926 "vfu_tgt_set_base_path", 00:04:38.926 "framework_get_pci_devices", 00:04:38.926 "framework_get_config", 00:04:38.926 "framework_get_subsystems", 00:04:38.926 "fsdev_set_opts", 00:04:38.926 "fsdev_get_opts", 00:04:38.926 "trace_get_info", 00:04:38.926 "trace_get_tpoint_group_mask", 00:04:38.926 "trace_disable_tpoint_group", 00:04:38.926 "trace_enable_tpoint_group", 00:04:38.926 "trace_clear_tpoint_mask", 00:04:38.926 "trace_set_tpoint_mask", 00:04:38.926 "notify_get_notifications", 00:04:38.926 "notify_get_types", 00:04:38.926 "spdk_get_version", 00:04:38.926 "rpc_get_methods" 00:04:38.926 ] 00:04:38.926 11:19:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.926 11:19:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:38.926 11:19:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3287720 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3287720 ']' 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3287720 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287720 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287720' 00:04:38.926 killing process with pid 3287720 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3287720 00:04:38.926 11:19:30 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3287720 00:04:39.187 00:04:39.187 real 0m1.533s 00:04:39.187 user 0m2.770s 00:04:39.187 sys 0m0.458s 00:04:39.187 11:19:31 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.188 11:19:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.188 ************************************ 00:04:39.188 END TEST spdkcli_tcp 00:04:39.188 ************************************ 00:04:39.188 11:19:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.188 11:19:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.188 11:19:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.188 11:19:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.188 ************************************ 00:04:39.188 START TEST dpdk_mem_utility 00:04:39.188 ************************************ 00:04:39.188 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.188 * Looking for test storage... 
00:04:39.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:39.188 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:39.188 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:39.188 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.450 11:19:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.450 --rc genhtml_branch_coverage=1 00:04:39.450 --rc genhtml_function_coverage=1 00:04:39.450 --rc genhtml_legend=1 00:04:39.450 --rc geninfo_all_blocks=1 00:04:39.450 --rc geninfo_unexecuted_blocks=1 00:04:39.450 00:04:39.450 ' 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.450 --rc 
genhtml_branch_coverage=1 00:04:39.450 --rc genhtml_function_coverage=1 00:04:39.450 --rc genhtml_legend=1 00:04:39.450 --rc geninfo_all_blocks=1 00:04:39.450 --rc geninfo_unexecuted_blocks=1 00:04:39.450 00:04:39.450 ' 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.450 --rc genhtml_branch_coverage=1 00:04:39.450 --rc genhtml_function_coverage=1 00:04:39.450 --rc genhtml_legend=1 00:04:39.450 --rc geninfo_all_blocks=1 00:04:39.450 --rc geninfo_unexecuted_blocks=1 00:04:39.450 00:04:39.450 ' 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:39.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.450 --rc genhtml_branch_coverage=1 00:04:39.450 --rc genhtml_function_coverage=1 00:04:39.450 --rc genhtml_legend=1 00:04:39.450 --rc geninfo_all_blocks=1 00:04:39.450 --rc geninfo_unexecuted_blocks=1 00:04:39.450 00:04:39.450 ' 00:04:39.450 11:19:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:39.450 11:19:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3288092 00:04:39.450 11:19:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3288092 00:04:39.450 11:19:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3288092 ']' 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.450 11:19:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.450 [2024-12-09 11:19:31.496072] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:04:39.450 [2024-12-09 11:19:31.496145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288092 ] 00:04:39.450 [2024-12-09 11:19:31.574777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.711 [2024-12-09 11:19:31.616852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.284 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.284 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:40.284 11:19:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:40.284 11:19:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:40.284 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.284 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.284 { 00:04:40.284 "filename": "/tmp/spdk_mem_dump.txt" 00:04:40.284 } 00:04:40.284 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.284 11:19:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:40.284 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:40.284 1 heaps totaling size 818.000000 MiB 00:04:40.284 size: 818.000000 MiB heap id: 0 00:04:40.284 end heaps---------- 00:04:40.284 9 mempools totaling size 603.782043 MiB 00:04:40.284 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:40.284 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:40.284 size: 100.555481 MiB name: bdev_io_3288092 00:04:40.284 size: 50.003479 MiB name: msgpool_3288092 00:04:40.284 size: 36.509338 MiB name: fsdev_io_3288092 00:04:40.284 size: 21.763794 MiB name: PDU_Pool 00:04:40.284 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:40.284 size: 4.133484 MiB name: evtpool_3288092 00:04:40.284 size: 0.026123 MiB name: Session_Pool 00:04:40.284 end mempools------- 00:04:40.284 6 memzones totaling size 4.142822 MiB 00:04:40.284 size: 1.000366 MiB name: RG_ring_0_3288092 00:04:40.284 size: 1.000366 MiB name: RG_ring_1_3288092 00:04:40.284 size: 1.000366 MiB name: RG_ring_4_3288092 00:04:40.284 size: 1.000366 MiB name: RG_ring_5_3288092 00:04:40.284 size: 0.125366 MiB name: RG_ring_2_3288092 00:04:40.284 size: 0.015991 MiB name: RG_ring_3_3288092 00:04:40.284 end memzones------- 00:04:40.284 11:19:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:40.284 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:40.284 list of free elements. 
size: 10.852478 MiB 00:04:40.284 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:40.284 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:40.284 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:40.284 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:40.284 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:40.284 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:40.284 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:40.284 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:40.284 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:40.284 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:40.284 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:40.284 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:40.284 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:40.284 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:40.284 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:40.284 list of standard malloc elements. size: 199.218628 MiB 00:04:40.284 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:40.284 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:40.284 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:40.284 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:40.284 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:40.284 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:40.284 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:40.284 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:40.284 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:40.284 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:40.284 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:40.284 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:40.284 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:40.284 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:40.284 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:40.284 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:40.284 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:40.284 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:40.284 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:40.284 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:40.284 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:40.284 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:40.284 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:40.285 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:40.285 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:40.285 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:40.285 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:40.285 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:40.285 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:40.285 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:40.285 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:40.285 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:40.285 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:40.285 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:40.285 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:40.285 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:40.285 list of memzone associated elements. size: 607.928894 MiB 00:04:40.285 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:40.285 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:40.285 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:40.285 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:40.285 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:40.285 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3288092_0 00:04:40.285 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:40.285 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3288092_0 00:04:40.285 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:40.285 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3288092_0 00:04:40.285 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:40.285 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:40.285 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:40.285 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:40.285 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:40.285 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3288092_0 00:04:40.285 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:40.285 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3288092 00:04:40.285 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:40.285 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3288092 00:04:40.285 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:40.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:40.285 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:40.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:40.285 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:40.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:40.285 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:40.285 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:40.285 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:40.285 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3288092 00:04:40.285 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:40.285 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3288092 00:04:40.285 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:40.285 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3288092 00:04:40.285 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:40.285 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3288092 00:04:40.285 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:40.285 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3288092 00:04:40.285 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:40.285 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3288092 00:04:40.285 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:40.285 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:40.285 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:40.285 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:40.285 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:40.285 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:40.285 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:40.285 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3288092 00:04:40.285 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:40.285 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3288092 00:04:40.285 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:40.285 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:40.285 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:40.285 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:40.285 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:40.285 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3288092 00:04:40.285 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:40.285 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:40.285 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:40.285 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3288092 00:04:40.285 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:40.285 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3288092 00:04:40.285 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:40.285 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3288092 00:04:40.285 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:40.285 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:40.285 11:19:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:40.285 11:19:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3288092 00:04:40.285 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3288092 ']' 00:04:40.285 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3288092 00:04:40.285 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:40.285 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.285 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3288092 00:04:40.546 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.546 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.546 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3288092' 00:04:40.546 killing process with pid 3288092 00:04:40.546 11:19:32 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3288092 00:04:40.546 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3288092 00:04:40.546 00:04:40.546 real 0m1.446s 00:04:40.546 user 0m1.502s 00:04:40.546 sys 0m0.454s 00:04:40.546 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.546 11:19:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.546 ************************************ 00:04:40.546 END TEST dpdk_mem_utility 00:04:40.546 ************************************ 00:04:40.808 11:19:32 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:40.808 11:19:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.808 11:19:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.808 11:19:32 -- common/autotest_common.sh@10 -- # set +x 00:04:40.808 ************************************ 00:04:40.808 START TEST event 00:04:40.808 ************************************ 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:40.808 * Looking for test storage... 00:04:40.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.808 11:19:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.808 11:19:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.808 11:19:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.808 11:19:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.808 11:19:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.808 11:19:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.808 11:19:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.808 11:19:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.808 11:19:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.808 11:19:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.808 11:19:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.808 11:19:32 event -- scripts/common.sh@344 -- # case "$op" in 00:04:40.808 11:19:32 event -- scripts/common.sh@345 -- # : 1 00:04:40.808 11:19:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.808 11:19:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.808 11:19:32 event -- scripts/common.sh@365 -- # decimal 1 00:04:40.808 11:19:32 event -- scripts/common.sh@353 -- # local d=1 00:04:40.808 11:19:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.808 11:19:32 event -- scripts/common.sh@355 -- # echo 1 00:04:40.808 11:19:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.808 11:19:32 event -- scripts/common.sh@366 -- # decimal 2 00:04:40.808 11:19:32 event -- scripts/common.sh@353 -- # local d=2 00:04:40.808 11:19:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.808 11:19:32 event -- scripts/common.sh@355 -- # echo 2 00:04:40.808 11:19:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.808 11:19:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.808 11:19:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.808 11:19:32 event -- scripts/common.sh@368 -- # return 0 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.808 --rc genhtml_branch_coverage=1 00:04:40.808 --rc genhtml_function_coverage=1 00:04:40.808 --rc genhtml_legend=1 00:04:40.808 --rc geninfo_all_blocks=1 00:04:40.808 --rc geninfo_unexecuted_blocks=1 00:04:40.808 00:04:40.808 ' 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.808 --rc genhtml_branch_coverage=1 00:04:40.808 --rc genhtml_function_coverage=1 00:04:40.808 --rc genhtml_legend=1 00:04:40.808 --rc geninfo_all_blocks=1 00:04:40.808 --rc geninfo_unexecuted_blocks=1 00:04:40.808 00:04:40.808 ' 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.808 --rc genhtml_branch_coverage=1 00:04:40.808 --rc genhtml_function_coverage=1 00:04:40.808 --rc genhtml_legend=1 00:04:40.808 --rc geninfo_all_blocks=1 00:04:40.808 --rc geninfo_unexecuted_blocks=1 00:04:40.808 00:04:40.808 ' 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.808 --rc genhtml_branch_coverage=1 00:04:40.808 --rc genhtml_function_coverage=1 00:04:40.808 --rc genhtml_legend=1 00:04:40.808 --rc geninfo_all_blocks=1 00:04:40.808 --rc geninfo_unexecuted_blocks=1 00:04:40.808 00:04:40.808 ' 00:04:40.808 11:19:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:40.808 11:19:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:40.808 11:19:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:40.808 11:19:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.808 11:19:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.069 ************************************ 00:04:41.069 START TEST event_perf 00:04:41.069 ************************************ 00:04:41.069 11:19:32 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:41.070 Running I/O for 1 seconds...[2024-12-09 11:19:33.007660] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:04:41.070 [2024-12-09 11:19:33.007730] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288398 ] 00:04:41.070 [2024-12-09 11:19:33.087132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.070 [2024-12-09 11:19:33.130532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.070 [2024-12-09 11:19:33.130646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.070 [2024-12-09 11:19:33.130801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.070 Running I/O for 1 seconds...[2024-12-09 11:19:33.130801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.012 00:04:42.012 lcore 0: 177661 00:04:42.012 lcore 1: 177661 00:04:42.012 lcore 2: 177659 00:04:42.012 lcore 3: 177663 00:04:42.012 done. 00:04:42.012 00:04:42.012 real 0m1.178s 00:04:42.012 user 0m4.101s 00:04:42.012 sys 0m0.074s 00:04:42.012 11:19:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.012 11:19:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.012 ************************************ 00:04:42.012 END TEST event_perf 00:04:42.012 ************************************ 00:04:42.273 11:19:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.273 11:19:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:42.273 11:19:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.273 11:19:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.273 ************************************ 00:04:42.273 START TEST event_reactor 00:04:42.273 ************************************ 00:04:42.273 11:19:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.273 [2024-12-09 11:19:34.256735] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:04:42.273 [2024-12-09 11:19:34.256828] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288673 ] 00:04:42.273 [2024-12-09 11:19:34.332796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.273 [2024-12-09 11:19:34.367126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.657 test_start 00:04:43.657 oneshot 00:04:43.657 tick 100 00:04:43.657 tick 100 00:04:43.657 tick 250 00:04:43.657 tick 100 00:04:43.657 tick 100 00:04:43.657 tick 100 00:04:43.657 tick 250 00:04:43.657 tick 500 00:04:43.657 tick 100 00:04:43.657 tick 100 00:04:43.657 tick 250 00:04:43.657 tick 100 00:04:43.657 tick 100 00:04:43.657 test_end 00:04:43.657 00:04:43.657 real 0m1.164s 00:04:43.657 user 0m1.100s 00:04:43.657 sys 0m0.060s 00:04:43.657 11:19:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.657 11:19:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.657 ************************************ 00:04:43.657 END TEST event_reactor 00:04:43.657 ************************************ 00:04:43.657 11:19:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.657 11:19:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:43.657 11:19:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.657 11:19:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.657 ************************************ 00:04:43.657 START TEST event_reactor_perf 00:04:43.657 ************************************ 00:04:43.657 11:19:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.657 [2024-12-09 11:19:35.493225] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:04:43.657 [2024-12-09 11:19:35.493324] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289021 ] 00:04:43.657 [2024-12-09 11:19:35.571463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.657 [2024-12-09 11:19:35.608468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.599 test_start 00:04:44.599 test_end 00:04:44.599 Performance: 369976 events per second 00:04:44.599 00:04:44.599 real 0m1.168s 00:04:44.599 user 0m1.094s 00:04:44.599 sys 0m0.070s 00:04:44.599 11:19:36 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.599 11:19:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.599 ************************************ 00:04:44.599 END TEST event_reactor_perf 00:04:44.599 ************************************ 00:04:44.599 11:19:36 event -- event/event.sh@49 -- # uname -s 00:04:44.599 11:19:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.599 11:19:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.599 11:19:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.599 11:19:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.599 11:19:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.599 ************************************ 00:04:44.599 START TEST event_scheduler 00:04:44.599 ************************************ 00:04:44.599 11:19:36 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.860 * Looking for test storage... 
00:04:44.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:44.860 11:19:36 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.860 11:19:36 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.860 11:19:36 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.860 11:19:36 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.860 11:19:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:44.861 11:19:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:44.861 11:19:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.861 11:19:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:44.861 11:19:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.861 11:19:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.861 11:19:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.861 11:19:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.861 --rc genhtml_branch_coverage=1 00:04:44.861 --rc genhtml_function_coverage=1 00:04:44.861 --rc genhtml_legend=1 00:04:44.861 --rc geninfo_all_blocks=1 00:04:44.861 --rc geninfo_unexecuted_blocks=1 00:04:44.861 00:04:44.861 ' 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.861 --rc genhtml_branch_coverage=1 00:04:44.861 --rc genhtml_function_coverage=1 00:04:44.861 --rc genhtml_legend=1 00:04:44.861 --rc geninfo_all_blocks=1 00:04:44.861 --rc geninfo_unexecuted_blocks=1 00:04:44.861 00:04:44.861 ' 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.861 --rc genhtml_branch_coverage=1 00:04:44.861 --rc genhtml_function_coverage=1 00:04:44.861 --rc genhtml_legend=1 00:04:44.861 --rc geninfo_all_blocks=1 00:04:44.861 --rc geninfo_unexecuted_blocks=1 00:04:44.861 00:04:44.861 ' 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.861 --rc genhtml_branch_coverage=1 00:04:44.861 --rc genhtml_function_coverage=1 00:04:44.861 --rc genhtml_legend=1 00:04:44.861 --rc geninfo_all_blocks=1 00:04:44.861 --rc geninfo_unexecuted_blocks=1 00:04:44.861 00:04:44.861 ' 00:04:44.861 11:19:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.861 11:19:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3289414 00:04:44.861 11:19:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.861 11:19:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.861 11:19:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3289414 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3289414 ']' 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.861 11:19:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.861 [2024-12-09 11:19:36.964034] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:04:44.861 [2024-12-09 11:19:36.964090] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289414 ] 00:04:45.122 [2024-12-09 11:19:37.025193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.122 [2024-12-09 11:19:37.059236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.122 [2024-12-09 11:19:37.059392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.122 [2024-12-09 11:19:37.059544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.122 [2024-12-09 11:19:37.059546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.122 11:19:37 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.122 11:19:37 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:45.122 11:19:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:45.122 11:19:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.122 11:19:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.122 [2024-12-09 11:19:37.091939] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:45.122 [2024-12-09 11:19:37.091953] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:45.122 [2024-12-09 11:19:37.091961] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:45.122 [2024-12-09 11:19:37.091965] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:45.122 [2024-12-09 11:19:37.091969] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:45.122 11:19:37 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:45.123 11:19:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 [2024-12-09 11:19:37.149003] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
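The trace above shows the scheduler test (started with --wait-for-rpc) switching the running target from the default static scheduler to the dynamic one and then starting the framework. A minimal sketch of the equivalent JSON-RPC calls against an ordinary spdk_tgt follows; it assumes a target already listening on the default /var/tmp/spdk.sock, and every method name below is taken from the rpc_get_methods dump earlier in this log:

    # assumes spdk_tgt was started with --wait-for-rpc, so the scheduler can still be changed
    scripts/rpc.py framework_set_scheduler dynamic   # replace the default static scheduler
    scripts/rpc.py framework_start_init              # finish subsystem initialization
    scripts/rpc.py framework_get_scheduler           # confirm scheduler name and options
    scripts/rpc.py framework_get_reactors            # per-core view of reactors and their threads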
00:04:45.123 11:19:37 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:45.123 11:19:37 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.123 11:19:37 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 ************************************ 00:04:45.123 START TEST scheduler_create_thread 00:04:45.123 ************************************ 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 2 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 3 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 4 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 5 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 6 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 7 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.123 8 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.123 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.384 9 00:04:45.384 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.384 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.384 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.384 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.645 10 00:04:45.645 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.645 11:19:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.645 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.645 11:19:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.029 11:19:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.029 11:19:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:47.029 11:19:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:47.029 11:19:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.029 11:19:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.972 11:19:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.972 11:19:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:47.972 11:19:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.972 11:19:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.543 11:19:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.543 11:19:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.543 11:19:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.543 11:19:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.543 11:19:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.486 11:19:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.486 00:04:49.486 real 0m4.224s 00:04:49.486 user 0m0.026s 00:04:49.486 sys 0m0.005s 00:04:49.486 11:19:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.486 11:19:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.486 ************************************ 00:04:49.486 END TEST scheduler_create_thread 00:04:49.486 ************************************ 00:04:49.486 11:19:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.486 11:19:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3289414 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3289414 ']' 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3289414 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3289414 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3289414' 00:04:49.486 killing process with pid 3289414 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3289414 00:04:49.486 11:19:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3289414 00:04:49.748 [2024-12-09 11:19:41.690293] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
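The scheduler_create_thread subtest above drives the test-only scheduler_plugin RPCs through rpc_cmd; since the xtrace is hard to follow, the calls are condensed here with their arguments exactly as they appear in the trace (thread ids 11 and 12 are simply the values this run happened to get back):

    # condensed from the xtrace above; rpc_cmd wraps scripts/rpc.py --plugin scheduler_plugin
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy thread pinned to core 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0             # idle thread, returns id 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                       # raise it to 50% active load
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100               # busy thread, returns id 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                              # and delete it again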
00:04:49.748 00:04:49.748 real 0m5.133s 00:04:49.748 user 0m10.159s 00:04:49.748 sys 0m0.372s 00:04:49.748 11:19:41 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.748 11:19:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.748 ************************************ 00:04:49.748 END TEST event_scheduler 00:04:49.748 ************************************ 00:04:49.748 11:19:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.748 11:19:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.748 11:19:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.748 11:19:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.748 11:19:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.010 ************************************ 00:04:50.010 START TEST app_repeat 00:04:50.010 ************************************ 00:04:50.010 11:19:41 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3290473 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3290473' 00:04:50.010 Process app_repeat pid: 3290473 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:50.010 spdk_app_start Round 0 00:04:50.010 11:19:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3290473 /var/tmp/spdk-nbd.sock 00:04:50.010 11:19:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3290473 ']' 00:04:50.010 11:19:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.010 11:19:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.010 11:19:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.010 11:19:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.010 11:19:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.010 [2024-12-09 11:19:41.973026] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:04:50.010 [2024-12-09 11:19:41.973096] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290473 ] 00:04:50.010 [2024-12-09 11:19:42.049706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.010 [2024-12-09 11:19:42.091709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.010 [2024-12-09 11:19:42.091713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.272 11:19:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.272 11:19:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.272 11:19:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.272 Malloc0 00:04:50.272 11:19:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.533 Malloc1 00:04:50.533 11:19:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.533 11:19:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.795 /dev/nbd0 00:04:50.795 11:19:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.795 11:19:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.795 1+0 records in 00:04:50.795 1+0 records out 00:04:50.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269044 s, 15.2 MB/s 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.795 11:19:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.795 11:19:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.795 11:19:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.795 11:19:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.795 /dev/nbd1 00:04:51.058 11:19:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.058 11:19:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.058 1+0 records in 00:04:51.058 1+0 records out 00:04:51.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279174 s, 14.7 MB/s 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.058 11:19:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.058 11:19:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.058 11:19:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.058 
11:19:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.058 11:19:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.058 11:19:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.058 { 00:04:51.058 "nbd_device": "/dev/nbd0", 00:04:51.058 "bdev_name": "Malloc0" 00:04:51.058 }, 00:04:51.058 { 00:04:51.058 "nbd_device": "/dev/nbd1", 00:04:51.058 "bdev_name": "Malloc1" 00:04:51.058 } 00:04:51.058 ]' 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.058 { 00:04:51.058 "nbd_device": "/dev/nbd0", 00:04:51.058 "bdev_name": "Malloc0" 00:04:51.058 }, 00:04:51.058 { 00:04:51.058 "nbd_device": "/dev/nbd1", 00:04:51.058 "bdev_name": "Malloc1" 00:04:51.058 } 00:04:51.058 ]' 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.058 /dev/nbd1' 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.058 /dev/nbd1' 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.058 11:19:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.320 256+0 records in 00:04:51.320 256+0 records out 00:04:51.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124855 s, 84.0 MB/s 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.320 256+0 records in 00:04:51.320 256+0 records out 00:04:51.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158544 s, 66.1 MB/s 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.320 256+0 records in 00:04:51.320 256+0 records out 00:04:51.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176728 s, 59.3 MB/s 00:04:51.320 11:19:43 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.320 11:19:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.582 11:19:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.843 11:19:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.843 11:19:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.104 11:19:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.104 [2024-12-09 11:19:44.190338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.104 [2024-12-09 11:19:44.226948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.104 [2024-12-09 11:19:44.226951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.104 [2024-12-09 11:19:44.258935] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.104 [2024-12-09 11:19:44.258973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.406 11:19:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.406 11:19:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:55.406 spdk_app_start Round 1 00:04:55.406 11:19:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3290473 /var/tmp/spdk-nbd.sock 00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3290473 ']' 00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
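Each app_repeat round traced in this log performs the same data-path check before killing and restarting the app. A minimal sketch of one round, assuming the spdk-nbd.sock RPC socket and using only commands that appear verbatim above; the $rpc and file-name shorthands are mine:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$rpc bdev_malloc_create 64 4096                # 64 MB malloc bdev, 4096-byte blocks -> Malloc0
$rpc nbd_start_disk Malloc0 /dev/nbd0          # expose the bdev as a kernel nbd device
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256             # 1 MiB of random data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through nbd
cmp -b -n 1M nbdrandtest /dev/nbd0             # read back and byte-compare
$rpc nbd_stop_disk /dev/nbd0
$rpc spdk_kill_instance SIGTERM                # end of round; the harness then sleeps 3 and restarts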
00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.406 11:19:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:55.406 11:19:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.406 Malloc0 00:04:55.406 11:19:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.666 Malloc1 00:04:55.666 11:19:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.666 /dev/nbd0 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:55.666 1+0 records in 00:04:55.666 1+0 records out 00:04:55.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214378 s, 19.1 MB/s 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.666 11:19:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.666 11:19:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.927 /dev/nbd1 00:04:55.927 11:19:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.927 11:19:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.927 1+0 records in 00:04:55.927 1+0 records out 00:04:55.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238042 s, 17.2 MB/s 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.927 11:19:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.927 11:19:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.927 11:19:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.927 11:19:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.927 11:19:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.927 11:19:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:56.189 { 00:04:56.189 "nbd_device": "/dev/nbd0", 00:04:56.189 "bdev_name": "Malloc0" 00:04:56.189 }, 00:04:56.189 { 00:04:56.189 "nbd_device": "/dev/nbd1", 00:04:56.189 "bdev_name": "Malloc1" 00:04:56.189 } 00:04:56.189 ]' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.189 { 00:04:56.189 "nbd_device": "/dev/nbd0", 00:04:56.189 "bdev_name": "Malloc0" 00:04:56.189 }, 00:04:56.189 { 00:04:56.189 "nbd_device": "/dev/nbd1", 00:04:56.189 "bdev_name": "Malloc1" 00:04:56.189 } 00:04:56.189 ]' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.189 /dev/nbd1' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.189 /dev/nbd1' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.189 256+0 records in 00:04:56.189 256+0 records out 00:04:56.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012121 s, 86.5 MB/s 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.189 256+0 records in 00:04:56.189 256+0 records out 00:04:56.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170463 s, 61.5 MB/s 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.189 256+0 records in 00:04:56.189 256+0 records out 00:04:56.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175976 s, 59.6 MB/s 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.189 11:19:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.450 11:19:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.711 11:19:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.972 11:19:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.972 11:19:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.972 11:19:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.232 [2024-12-09 11:19:49.213446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.232 [2024-12-09 11:19:49.249834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.232 [2024-12-09 11:19:49.249836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.233 [2024-12-09 11:19:49.282587] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.233 [2024-12-09 11:19:49.282621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.534 11:19:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.534 11:19:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:00.534 spdk_app_start Round 2 00:05:00.534 11:19:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3290473 /var/tmp/spdk-nbd.sock 00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3290473 ']' 00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
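The readiness probe repeated for every /dev/nbdN above (waitfornbd) is worth isolating. A sketch reconstructed from the xtrace: the retry bound of 20, the /proc/partitions check, and the single-block direct read-back all appear verbatim, while the per-retry delay is not visible in this log and is an assumption here:

nbd=nbd0
for i in $(seq 1 20); do
    grep -q -w "$nbd" /proc/partitions && break   # kernel has registered the device
    sleep 0.1                                     # assumed delay; not shown in the trace
done
dd if=/dev/$nbd of=nbdtest bs=4096 count=1 iflag=direct   # prove the device answers I/O
[ "$(stat -c %s nbdtest)" != 0 ]                  # read-back produced data
rm -f nbdtest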
00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.534 11:19:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.534 11:19:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.534 Malloc0 00:05:00.534 11:19:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.534 Malloc1 00:05:00.535 11:19:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.535 11:19:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.795 /dev/nbd0 00:05:00.795 11:19:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.796 11:19:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:00.796 1+0 records in 00:05:00.796 1+0 records out 00:05:00.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243486 s, 16.8 MB/s 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.796 11:19:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.796 11:19:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.796 11:19:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.796 11:19:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.055 /dev/nbd1 00:05:01.055 11:19:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.055 11:19:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.055 1+0 records in 00:05:01.055 1+0 records out 00:05:01.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021079 s, 19.4 MB/s 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.055 11:19:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.055 11:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.055 11:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.055 11:19:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.055 11:19:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.055 11:19:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.316 11:19:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:01.316 { 00:05:01.316 "nbd_device": "/dev/nbd0", 00:05:01.317 "bdev_name": "Malloc0" 00:05:01.317 }, 00:05:01.317 { 00:05:01.317 "nbd_device": "/dev/nbd1", 00:05:01.317 "bdev_name": "Malloc1" 00:05:01.317 } 00:05:01.317 ]' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.317 { 00:05:01.317 "nbd_device": "/dev/nbd0", 00:05:01.317 "bdev_name": "Malloc0" 00:05:01.317 }, 00:05:01.317 { 00:05:01.317 "nbd_device": "/dev/nbd1", 00:05:01.317 "bdev_name": "Malloc1" 00:05:01.317 } 00:05:01.317 ]' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.317 /dev/nbd1' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.317 /dev/nbd1' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.317 256+0 records in 00:05:01.317 256+0 records out 00:05:01.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117087 s, 89.6 MB/s 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.317 256+0 records in 00:05:01.317 256+0 records out 00:05:01.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186188 s, 56.3 MB/s 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.317 256+0 records in 00:05:01.317 256+0 records out 00:05:01.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180424 s, 58.1 MB/s 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.317 11:19:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.579 11:19:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.841 11:19:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.841 11:19:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.103 11:19:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.364 [2024-12-09 11:19:54.286292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.364 [2024-12-09 11:19:54.322442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.364 [2024-12-09 11:19:54.322445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.364 [2024-12-09 11:19:54.354154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.364 [2024-12-09 11:19:54.354189] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.672 11:19:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3290473 /var/tmp/spdk-nbd.sock 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3290473 ']' 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
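The count=0 check that closes each round confirms no nbd device is left attached; it parses nbd_get_disks with jq exactly as in the trace above. A minimal sketch, with the $rpc shorthand again mine:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
names=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits nonzero on zero matches
[ "$count" -ne 0 ] && { echo 'nbd devices still attached'; exit 1; }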
00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.672 11:19:57 event.app_repeat -- event/event.sh@39 -- # killprocess 3290473 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3290473 ']' 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3290473 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3290473 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3290473' 00:05:05.672 killing process with pid 3290473 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3290473 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3290473 00:05:05.672 spdk_app_start is called in Round 0. 00:05:05.672 Shutdown signal received, stop current app iteration 00:05:05.672 Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 reinitialization... 00:05:05.672 spdk_app_start is called in Round 1. 00:05:05.672 Shutdown signal received, stop current app iteration 00:05:05.672 Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 reinitialization... 00:05:05.672 spdk_app_start is called in Round 2. 00:05:05.672 Shutdown signal received, stop current app iteration 00:05:05.672 Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 reinitialization... 00:05:05.672 spdk_app_start is called in Round 3. 
00:05:05.672 Shutdown signal received, stop current app iteration 00:05:05.672 11:19:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:05.672 11:19:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:05.672 00:05:05.672 real 0m15.569s 00:05:05.672 user 0m33.876s 00:05:05.672 sys 0m2.303s 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.672 11:19:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 ************************************ 00:05:05.672 END TEST app_repeat 00:05:05.672 ************************************ 00:05:05.672 11:19:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:05.672 11:19:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:05.672 11:19:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.672 11:19:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.672 11:19:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 ************************************ 00:05:05.672 START TEST cpu_locks 00:05:05.672 ************************************ 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:05.672 * Looking for test storage... 00:05:05.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.672 11:19:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.672 --rc genhtml_branch_coverage=1 00:05:05.672 --rc genhtml_function_coverage=1 00:05:05.672 --rc genhtml_legend=1 00:05:05.672 --rc geninfo_all_blocks=1 00:05:05.672 --rc geninfo_unexecuted_blocks=1 00:05:05.672 00:05:05.672 ' 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.672 --rc genhtml_branch_coverage=1 00:05:05.672 --rc genhtml_function_coverage=1 00:05:05.672 --rc genhtml_legend=1 00:05:05.672 --rc geninfo_all_blocks=1 00:05:05.672 --rc geninfo_unexecuted_blocks=1 00:05:05.672 00:05:05.672 ' 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.672 --rc genhtml_branch_coverage=1 00:05:05.672 --rc genhtml_function_coverage=1 00:05:05.672 --rc genhtml_legend=1 00:05:05.672 --rc geninfo_all_blocks=1 00:05:05.672 --rc geninfo_unexecuted_blocks=1 00:05:05.672 00:05:05.672 ' 00:05:05.672 11:19:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.672 --rc genhtml_branch_coverage=1 00:05:05.672 --rc genhtml_function_coverage=1 00:05:05.672 --rc genhtml_legend=1 00:05:05.672 --rc geninfo_all_blocks=1 00:05:05.672 --rc geninfo_unexecuted_blocks=1 00:05:05.672 00:05:05.672 ' 00:05:05.672 11:19:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:05.672 11:19:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:05.673 11:19:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:05.673 11:19:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:05.673 11:19:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.673 11:19:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.673 11:19:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.673 ************************************ 
00:05:05.673 START TEST default_locks 00:05:05.673 ************************************ 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3293740 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3293740 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3293740 ']' 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.673 11:19:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.934 [2024-12-09 11:19:57.868944] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:05.934 [2024-12-09 11:19:57.869005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293740 ] 00:05:05.934 [2024-12-09 11:19:57.945164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.934 [2024-12-09 11:19:57.986944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.506 11:19:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.506 11:19:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:06.506 11:19:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3293740 00:05:06.506 11:19:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3293740 00:05:06.506 11:19:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.077 lslocks: write error 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3293740 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3293740 ']' 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3293740 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3293740 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3293740' 00:05:07.077 killing process with pid 3293740 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3293740 00:05:07.077 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3293740 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3293740 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3293740 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3293740 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3293740 ']' 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
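The "lslocks: write error" seen in the default_locks trace above is expected noise: grep -q exits on its first match and lslocks reports the resulting broken pipe. The check itself, condensed from the cpu_locks.sh trace (the pid is the one from this run):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # true while the pid holds a core lock file
    }
    locks_exist 3293740 && echo "pid 3293740 holds its core lock"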
00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3293740) - No such process 00:05:07.339 ERROR: process (pid: 3293740) is no longer running 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:07.339 00:05:07.339 real 0m1.519s 00:05:07.339 user 0m1.625s 00:05:07.339 sys 0m0.523s 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.339 11:19:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.339 ************************************ 00:05:07.339 END TEST default_locks 00:05:07.339 ************************************ 00:05:07.339 11:19:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:07.339 11:19:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.339 11:19:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.339 11:19:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.339 ************************************ 00:05:07.339 START TEST default_locks_via_rpc 00:05:07.339 ************************************ 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3294107 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3294107 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3294107 ']' 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
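waitforlisten, echoed above, polls until the freshly launched target answers on its RPC socket. A simplified sketch, assuming it is enough to wait for the UNIX socket to appear (the real helper in autotest_common.sh also retries an actual RPC call, with max_retries=100 as in the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S $rpc_addr ]] && return 0           # socket is up, target is listening
            sleep 0.1
        done
        return 1
    }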
00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.339 11:19:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.339 [2024-12-09 11:19:59.454874] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:07.339 [2024-12-09 11:19:59.454925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294107 ] 00:05:07.600 [2024-12-09 11:19:59.527295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.600 [2024-12-09 11:19:59.564276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3294107 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3294107 00:05:08.172 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.742 11:20:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3294107 00:05:08.742 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3294107 ']' 00:05:08.743 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3294107 00:05:08.743 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.743 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.743 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3294107 00:05:09.003 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.003 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.003 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294107' 00:05:09.003 killing process with pid 3294107 00:05:09.003 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3294107 00:05:09.003 11:20:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3294107 00:05:09.003 00:05:09.003 real 0m1.715s 00:05:09.003 user 0m1.832s 00:05:09.003 sys 0m0.581s 00:05:09.003 11:20:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.003 11:20:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.003 ************************************ 00:05:09.003 END TEST default_locks_via_rpc 00:05:09.003 ************************************ 00:05:09.003 11:20:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:09.003 11:20:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.003 11:20:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.003 11:20:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 ************************************ 00:05:09.265 START TEST non_locking_app_on_locked_coremask 00:05:09.265 ************************************ 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3294470 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3294470 /var/tmp/spdk.sock 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3294470 ']' 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 11:20:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.265 [2024-12-09 11:20:01.240825] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
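default_locks_via_rpc, concluded above, toggles the same core locks at runtime instead of at startup. A sketch of that round-trip, with the RPC method names taken from the trace and rpc.py left on its default /var/tmp/spdk.sock socket:

    shopt -s nullglob                                  # let an unmatched glob expand to nothing
    scripts/rpc.py framework_disable_cpumask_locks     # target releases its lock files
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 )) || echo "locks still present" >&2
    scripts/rpc.py framework_enable_cpumask_locks      # target re-acquires them
    # tgt_pid: the running spdk_tgt's pid (3294107 in this run)
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"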
00:05:09.265 [2024-12-09 11:20:01.240878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294470 ] 00:05:09.265 [2024-12-09 11:20:01.314155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.265 [2024-12-09 11:20:01.352553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3294801 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3294801 /var/tmp/spdk2.sock 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3294801 ']' 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.207 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.207 [2024-12-09 11:20:02.054212] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:10.207 [2024-12-09 11:20:02.054264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294801 ] 00:05:10.207 [2024-12-09 11:20:02.166849] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
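The "CPU core locks deactivated" notice above is the point of non_locking_app_on_locked_coremask: a second target started with --disable-cpumask-locks never tries to claim the core, so it coexists with the lock-holding first target. The scenario in two lines, with binary and socket paths as in this workspace:

    build/bin/spdk_tgt -m 0x1 &                        # first target claims core 0's lock file
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock \
        --disable-cpumask-locks &                      # second target starts fine on the same core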
00:05:10.207 [2024-12-09 11:20:02.166879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.207 [2024-12-09 11:20:02.239297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.778 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.778 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.778 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3294470 00:05:10.778 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3294470 00:05:10.778 11:20:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.040 lslocks: write error 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3294470 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3294470 ']' 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3294470 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3294470 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294470' 00:05:11.040 killing process with pid 3294470 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3294470 00:05:11.040 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3294470 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3294801 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3294801 ']' 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3294801 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3294801 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294801' 00:05:11.611 
killing process with pid 3294801 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3294801 00:05:11.611 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3294801 00:05:11.872 00:05:11.872 real 0m2.668s 00:05:11.872 user 0m2.943s 00:05:11.872 sys 0m0.778s 00:05:11.872 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.872 11:20:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.872 ************************************ 00:05:11.872 END TEST non_locking_app_on_locked_coremask 00:05:11.872 ************************************ 00:05:11.872 11:20:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:11.872 11:20:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.872 11:20:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.872 11:20:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.872 ************************************ 00:05:11.872 START TEST locking_app_on_unlocked_coremask 00:05:11.872 ************************************ 00:05:11.872 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:11.872 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3295179 00:05:11.872 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3295179 /var/tmp/spdk.sock 00:05:11.872 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3295179 ']' 00:05:11.872 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.872 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.873 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.873 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.873 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.873 11:20:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:11.873 [2024-12-09 11:20:03.982436] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:11.873 [2024-12-09 11:20:03.982493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295179 ] 00:05:12.133 [2024-12-09 11:20:04.054890] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
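locking_app_on_unlocked_coremask, starting above, is the mirror image: here the first target opts out of locking, which leaves core 0 unclaimed for a fully lock-enabled second target. Sketched under the same path assumptions as before:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # no lock file created
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims /var/tmp/spdk_cpu_lock_000 itself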
00:05:12.133 [2024-12-09 11:20:04.054919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.133 [2024-12-09 11:20:04.093135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3295205 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3295205 /var/tmp/spdk2.sock 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3295205 ']' 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.704 11:20:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:12.704 [2024-12-09 11:20:04.816527] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:05:12.704 [2024-12-09 11:20:04.816583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295205 ] 00:05:12.965 [2024-12-09 11:20:04.930940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.965 [2024-12-09 11:20:05.003540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.542 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.542 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.542 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3295205 00:05:13.542 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3295205 00:05:13.542 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.802 lslocks: write error 00:05:13.802 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3295179 00:05:13.802 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3295179 ']' 00:05:13.802 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3295179 00:05:13.802 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.802 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.802 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3295179 00:05:13.802 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.062 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.062 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3295179' 00:05:14.062 killing process with pid 3295179 00:05:14.062 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3295179 00:05:14.062 11:20:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3295179 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3295205 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3295205 ']' 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3295205 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3295205 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.323 11:20:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3295205' 00:05:14.323 killing process with pid 3295205 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3295205 00:05:14.323 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3295205 00:05:14.584 00:05:14.584 real 0m2.718s 00:05:14.584 user 0m3.021s 00:05:14.584 sys 0m0.793s 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.584 ************************************ 00:05:14.584 END TEST locking_app_on_unlocked_coremask 00:05:14.584 ************************************ 00:05:14.584 11:20:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:14.584 11:20:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.584 11:20:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.584 11:20:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.584 ************************************ 00:05:14.584 START TEST locking_app_on_locked_coremask 00:05:14.584 ************************************ 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3295698 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3295698 /var/tmp/spdk.sock 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3295698 ']' 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.584 11:20:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.846 [2024-12-09 11:20:06.785607] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:05:14.846 [2024-12-09 11:20:06.785666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295698 ] 00:05:14.846 [2024-12-09 11:20:06.863666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.846 [2024-12-09 11:20:06.905927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3295898 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3295898 /var/tmp/spdk2.sock 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3295898 /var/tmp/spdk2.sock 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3295898 /var/tmp/spdk2.sock 00:05:15.787 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3295898 ']' 00:05:15.788 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.788 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.788 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.788 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.788 11:20:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.788 [2024-12-09 11:20:07.642789] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:05:15.788 [2024-12-09 11:20:07.642842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295898 ] 00:05:15.788 [2024-12-09 11:20:07.757070] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3295698 has claimed it. 00:05:15.788 [2024-12-09 11:20:07.757109] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:16.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3295898) - No such process 00:05:16.370 ERROR: process (pid: 3295898) is no longer running 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3295698 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3295698 00:05:16.370 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.942 lslocks: write error 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3295698 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3295698 ']' 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3295698 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3295698 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3295698' 00:05:16.942 killing process with pid 3295698 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3295698 00:05:16.942 11:20:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3295698 00:05:17.201 00:05:17.201 real 0m2.443s 00:05:17.201 user 0m2.754s 00:05:17.201 sys 0m0.673s 00:05:17.201 11:20:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:17.201 11:20:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.201 ************************************ 00:05:17.201 END TEST locking_app_on_locked_coremask 00:05:17.201 ************************************ 00:05:17.201 11:20:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:17.201 11:20:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.201 11:20:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.201 11:20:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.201 ************************************ 00:05:17.201 START TEST locking_overlapped_coremask 00:05:17.201 ************************************ 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3296261 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3296261 /var/tmp/spdk.sock 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3296261 ']' 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.201 11:20:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.201 [2024-12-09 11:20:09.290248] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
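The claim_cpu_cores error in the trace above ("Cannot create lock on core 0, probably process 3295698 has claimed it") is the success condition of locking_app_on_locked_coremask: the second, lock-enabled target must refuse to start. The harness asserts this with NOT waitforlisten; a plain exit-status version of the same check, with the pid and paths from this run:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # doomed: core 0 is already locked
    pid2=$!
    if ! wait "$pid2"; then
        echo "second target rejected, as expected"       # it exited after the claim_cpu_cores error
    fi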
00:05:17.201 [2024-12-09 11:20:09.290296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296261 ] 00:05:17.463 [2024-12-09 11:20:09.362986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.463 [2024-12-09 11:20:09.401890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.463 [2024-12-09 11:20:09.402026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.463 [2024-12-09 11:20:09.402056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3296398 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3296398 /var/tmp/spdk2.sock 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3296398 /var/tmp/spdk2.sock 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3296398 /var/tmp/spdk2.sock 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3296398 ']' 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.035 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.035 [2024-12-09 11:20:10.144175] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:05:18.035 [2024-12-09 11:20:10.144230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296398 ] 00:05:18.296 [2024-12-09 11:20:10.232648] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3296261 has claimed it. 00:05:18.296 [2024-12-09 11:20:10.232679] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:18.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3296398) - No such process 00:05:18.867 ERROR: process (pid: 3296398) is no longer running 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3296261 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3296261 ']' 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3296261 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3296261 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3296261' 00:05:18.867 killing process with pid 3296261 00:05:18.867 11:20:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3296261 00:05:18.867 11:20:10 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3296261 00:05:19.128 00:05:19.128 real 0m1.803s 00:05:19.128 user 0m5.237s 00:05:19.128 sys 0m0.374s 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.128 ************************************ 00:05:19.128 END TEST locking_overlapped_coremask 00:05:19.128 ************************************ 00:05:19.128 11:20:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:19.128 11:20:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.128 11:20:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.128 11:20:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.128 ************************************ 00:05:19.128 START TEST locking_overlapped_coremask_via_rpc 00:05:19.128 ************************************ 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3296637 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3296637 /var/tmp/spdk.sock 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3296637 ']' 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.128 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.128 [2024-12-09 11:20:11.169983] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:19.128 [2024-12-09 11:20:11.170036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296637 ] 00:05:19.128 [2024-12-09 11:20:11.242282] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
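(A note on the failure traced above: -m 0x7 is binary 111, so the first target claims cores 0-2 and drops one lock file per core, /var/tmp/spdk_cpu_lock_000 through _002, while -m 0x1c is binary 11100, cores 2-4, so the second target collides on core 2 and exits. The check_remaining_locks helper seen in the xtrace then only has to compare a glob against a brace expansion. A minimal sketch of that comparison, reconstructed from the traced lines — the lock-file paths come straight from the log, the function body is illustrative:

check_remaining_locks() {
    # Both arrays expand in sorted order, so a plain string compare suffices.
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
}
check_remaining_locks && echo "exactly cores 0-2 hold lock files"
)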
00:05:19.128 [2024-12-09 11:20:11.242313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.128 [2024-12-09 11:20:11.281044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.128 [2024-12-09 11:20:11.281120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.128 [2024-12-09 11:20:11.281124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3296859 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3296859 /var/tmp/spdk2.sock 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3296859 ']' 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.071 11:20:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.071 [2024-12-09 11:20:12.023430] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:20.071 [2024-12-09 11:20:12.023485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296859 ] 00:05:20.071 [2024-12-09 11:20:12.110631] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.071 [2024-12-09 11:20:12.110657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.071 [2024-12-09 11:20:12.177141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.071 [2024-12-09 11:20:12.177301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.071 [2024-12-09 11:20:12.177303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:20.654 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.654 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.654 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.654 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.654 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.916 [2024-12-09 11:20:12.826073] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3296637 has claimed it. 
00:05:20.916 request: 00:05:20.916 { 00:05:20.916 "method": "framework_enable_cpumask_locks", 00:05:20.916 "req_id": 1 00:05:20.916 } 00:05:20.916 Got JSON-RPC error response 00:05:20.916 response: 00:05:20.916 { 00:05:20.916 "code": -32603, 00:05:20.916 "message": "Failed to claim CPU core: 2" 00:05:20.916 } 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3296637 /var/tmp/spdk.sock 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3296637 ']' 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.916 11:20:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3296859 /var/tmp/spdk2.sock 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3296859 ']' 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
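(The via_rpc variant above defers lock acquisition: both targets start with --disable-cpumask-locks, the first then claims its cores over JSON-RPC, and the second's claim fails with -32603 because core 2 overlaps. A condensed sketch of that sequence — the masks, socket paths, method name, and error code are all from the log; the short spdk_tgt/rpc.py names stand in for the full build paths:

spdk_tgt -m 0x7  --disable-cpumask-locks -r /var/tmp/spdk.sock &    # cores 0-2, locks off at startup
spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cores 2-4, overlaps on core 2
rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # ok: creates spdk_cpu_lock_000..002
rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # error -32603: "Failed to claim CPU core: 2"
)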
00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.916 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.177 00:05:21.177 real 0m2.088s 00:05:21.177 user 0m0.850s 00:05:21.177 sys 0m0.159s 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.177 11:20:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.177 ************************************ 00:05:21.177 END TEST locking_overlapped_coremask_via_rpc 00:05:21.177 ************************************ 00:05:21.177 11:20:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:21.177 11:20:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3296637 ]] 00:05:21.177 11:20:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3296637 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3296637 ']' 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3296637 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3296637 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3296637' 00:05:21.177 killing process with pid 3296637 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3296637 00:05:21.177 11:20:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3296637 00:05:21.439 11:20:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3296859 ]] 00:05:21.439 11:20:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3296859 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3296859 ']' 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3296859 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3296859 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3296859' 00:05:21.439 killing process with pid 3296859 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3296859 00:05:21.439 11:20:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3296859 00:05:21.701 11:20:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:21.701 11:20:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:21.701 11:20:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3296637 ]] 00:05:21.701 11:20:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3296637 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3296637 ']' 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3296637 00:05:21.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3296637) - No such process 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3296637 is not found' 00:05:21.701 Process with pid 3296637 is not found 00:05:21.701 11:20:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3296859 ]] 00:05:21.701 11:20:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3296859 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3296859 ']' 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3296859 00:05:21.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3296859) - No such process 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3296859 is not found' 00:05:21.701 Process with pid 3296859 is not found 00:05:21.701 11:20:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:21.701 00:05:21.701 real 0m16.199s 00:05:21.701 user 0m28.487s 00:05:21.701 sys 0m4.783s 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.701 11:20:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.701 ************************************ 00:05:21.701 END TEST cpu_locks 00:05:21.701 ************************************ 00:05:21.701 00:05:21.701 real 0m41.066s 00:05:21.701 user 1m19.111s 00:05:21.701 sys 0m8.055s 00:05:21.701 11:20:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.701 11:20:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.701 ************************************ 00:05:21.701 END TEST event 00:05:21.701 ************************************ 00:05:21.701 11:20:13 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:21.701 11:20:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.701 11:20:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.701 11:20:13 -- common/autotest_common.sh@10 -- # set +x 00:05:21.963 ************************************ 00:05:21.963 START TEST thread 00:05:21.963 ************************************ 00:05:21.963 11:20:13 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:21.964 * Looking for test storage... 00:05:21.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:21.964 11:20:13 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.964 11:20:13 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.964 11:20:13 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.964 11:20:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.964 11:20:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.964 11:20:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.964 11:20:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.964 11:20:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.964 11:20:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.964 11:20:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.964 11:20:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.964 11:20:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.964 11:20:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.964 11:20:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.964 11:20:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:21.964 11:20:14 thread -- scripts/common.sh@345 -- # : 1 00:05:21.964 11:20:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.964 11:20:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.964 11:20:14 thread -- scripts/common.sh@365 -- # decimal 1 00:05:21.964 11:20:14 thread -- scripts/common.sh@353 -- # local d=1 00:05:21.964 11:20:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.964 11:20:14 thread -- scripts/common.sh@355 -- # echo 1 00:05:21.964 11:20:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.964 11:20:14 thread -- scripts/common.sh@366 -- # decimal 2 00:05:21.964 11:20:14 thread -- scripts/common.sh@353 -- # local d=2 00:05:21.964 11:20:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.964 11:20:14 thread -- scripts/common.sh@355 -- # echo 2 00:05:21.964 11:20:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.964 11:20:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.964 11:20:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.964 11:20:14 thread -- scripts/common.sh@368 -- # return 0 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.964 --rc genhtml_branch_coverage=1 00:05:21.964 --rc genhtml_function_coverage=1 00:05:21.964 --rc genhtml_legend=1 00:05:21.964 --rc geninfo_all_blocks=1 00:05:21.964 --rc geninfo_unexecuted_blocks=1 00:05:21.964 00:05:21.964 ' 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.964 --rc genhtml_branch_coverage=1 00:05:21.964 --rc genhtml_function_coverage=1 00:05:21.964 --rc genhtml_legend=1 00:05:21.964 --rc geninfo_all_blocks=1 00:05:21.964 --rc geninfo_unexecuted_blocks=1 00:05:21.964 
00:05:21.964 ' 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.964 --rc genhtml_branch_coverage=1 00:05:21.964 --rc genhtml_function_coverage=1 00:05:21.964 --rc genhtml_legend=1 00:05:21.964 --rc geninfo_all_blocks=1 00:05:21.964 --rc geninfo_unexecuted_blocks=1 00:05:21.964 00:05:21.964 ' 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.964 --rc genhtml_branch_coverage=1 00:05:21.964 --rc genhtml_function_coverage=1 00:05:21.964 --rc genhtml_legend=1 00:05:21.964 --rc geninfo_all_blocks=1 00:05:21.964 --rc geninfo_unexecuted_blocks=1 00:05:21.964 00:05:21.964 ' 00:05:21.964 11:20:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.964 11:20:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.964 ************************************ 00:05:21.964 START TEST thread_poller_perf 00:05:21.964 ************************************ 00:05:21.964 11:20:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:21.964 [2024-12-09 11:20:14.119830] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:21.964 [2024-12-09 11:20:14.119935] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297421 ] 00:05:22.226 [2024-12-09 11:20:14.201016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.226 [2024-12-09 11:20:14.242955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.226 Running 1000 pollers for 1 seconds with 1 microseconds period. 
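(Aside on the scripts/common.sh version dance traced a few records up: "lt 1.15 2" decides whether the installed lcov predates 2.x and therefore needs the explicit --rc branch/function coverage flags. It splits each version string on '.' and '-' and compares numerically, component by component. A compact re-implementation of that comparison — illustrative, not the exact library code, and the real helper also validates that components are numeric:

lt_version() {   # returns 0 (true) when $1 < $2
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt_version 1.15 2 && echo "old lcov: pass --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
)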
00:05:23.171 [2024-12-09T10:20:15.333Z] ====================================== 00:05:23.171 [2024-12-09T10:20:15.333Z] busy:2409838582 (cyc) 00:05:23.171 [2024-12-09T10:20:15.333Z] total_run_count: 287000 00:05:23.171 [2024-12-09T10:20:15.333Z] tsc_hz: 2400000000 (cyc) 00:05:23.171 [2024-12-09T10:20:15.333Z] ====================================== 00:05:23.171 [2024-12-09T10:20:15.333Z] poller_cost: 8396 (cyc), 3498 (nsec) 00:05:23.171 00:05:23.171 real 0m1.187s 00:05:23.171 user 0m1.107s 00:05:23.171 sys 0m0.076s 00:05:23.171 11:20:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.171 11:20:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.171 ************************************ 00:05:23.171 END TEST thread_poller_perf 00:05:23.171 ************************************ 00:05:23.171 11:20:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.171 11:20:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:23.171 11:20:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.171 11:20:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.433 ************************************ 00:05:23.433 START TEST thread_poller_perf 00:05:23.433 ************************************ 00:05:23.433 11:20:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.434 [2024-12-09 11:20:15.379447] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:05:23.434 [2024-12-09 11:20:15.379530] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297664 ] 00:05:23.434 [2024-12-09 11:20:15.456142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.434 [2024-12-09 11:20:15.493899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.434 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:24.376 [2024-12-09T10:20:16.538Z] ====================================== 00:05:24.376 [2024-12-09T10:20:16.538Z] busy:2402089280 (cyc) 00:05:24.376 [2024-12-09T10:20:16.538Z] total_run_count: 3502000 00:05:24.376 [2024-12-09T10:20:16.538Z] tsc_hz: 2400000000 (cyc) 00:05:24.376 [2024-12-09T10:20:16.538Z] ====================================== 00:05:24.376 [2024-12-09T10:20:16.538Z] poller_cost: 685 (cyc), 285 (nsec) 00:05:24.376 00:05:24.376 real 0m1.168s 00:05:24.376 user 0m1.097s 00:05:24.376 sys 0m0.067s 00:05:24.376 11:20:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.376 11:20:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.376 ************************************ 00:05:24.376 END TEST thread_poller_perf 00:05:24.376 ************************************ 00:05:24.637 11:20:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:24.637 00:05:24.637 real 0m2.665s 00:05:24.637 user 0m2.347s 00:05:24.637 sys 0m0.322s 00:05:24.637 11:20:16 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.637 11:20:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.637 ************************************ 00:05:24.637 END TEST thread 00:05:24.637 ************************************ 00:05:24.637 11:20:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:24.637 11:20:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:24.637 11:20:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.637 11:20:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.637 11:20:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.637 ************************************ 00:05:24.637 START TEST app_cmdline 00:05:24.637 ************************************ 00:05:24.637 11:20:16 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:24.637 * Looking for test storage... 
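(Back to the two poller_perf summaries above: per the banners, -b is the poller count, -l the poller period in microseconds, and -t the run time in seconds. poller_cost is roughly busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (2.4 GHz here). The raw quotients land a touch above the reported 8396 and 685 cycles, presumably because the tool nets out some framework overhead, so treat this as a cross-check rather than the exact formula:

awk 'BEGIN {
    hz = 2400000000                                    # tsc_hz from the summaries
    printf "1us timer pollers: %.0f cyc/call, %.0f ns\n", 2409838582 / 287000,  8396 * 1e9 / hz
    printf "busy-loop pollers: %.0f cyc/call, %.0f ns\n", 2402089280 / 3502000, 685  * 1e9 / hz
}'

The roughly 12x gap (3498 ns vs 285 ns per call) is consistent with timed pollers paying timer bookkeeping on every 1-microsecond expiry, while -l 0 pollers are invoked straight from the reactor loop.)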
00:05:24.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:24.637 11:20:16 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.637 11:20:16 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.637 11:20:16 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.898 11:20:16 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:24.898 11:20:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.899 11:20:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:24.899 11:20:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.899 11:20:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.899 11:20:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.899 11:20:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.899 --rc genhtml_branch_coverage=1 00:05:24.899 --rc genhtml_function_coverage=1 00:05:24.899 --rc genhtml_legend=1 00:05:24.899 --rc geninfo_all_blocks=1 00:05:24.899 --rc geninfo_unexecuted_blocks=1 00:05:24.899 00:05:24.899 ' 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.899 --rc genhtml_branch_coverage=1 00:05:24.899 --rc genhtml_function_coverage=1 00:05:24.899 --rc genhtml_legend=1 00:05:24.899 --rc geninfo_all_blocks=1 00:05:24.899 --rc geninfo_unexecuted_blocks=1 
00:05:24.899 00:05:24.899 ' 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.899 --rc genhtml_branch_coverage=1 00:05:24.899 --rc genhtml_function_coverage=1 00:05:24.899 --rc genhtml_legend=1 00:05:24.899 --rc geninfo_all_blocks=1 00:05:24.899 --rc geninfo_unexecuted_blocks=1 00:05:24.899 00:05:24.899 ' 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.899 --rc genhtml_branch_coverage=1 00:05:24.899 --rc genhtml_function_coverage=1 00:05:24.899 --rc genhtml_legend=1 00:05:24.899 --rc geninfo_all_blocks=1 00:05:24.899 --rc geninfo_unexecuted_blocks=1 00:05:24.899 00:05:24.899 ' 00:05:24.899 11:20:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:24.899 11:20:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:24.899 11:20:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3297950 00:05:24.899 11:20:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3297950 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3297950 ']' 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.899 11:20:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:24.899 [2024-12-09 11:20:16.856400] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:05:24.899 [2024-12-09 11:20:16.856458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297950 ] 00:05:24.899 [2024-12-09 11:20:16.925163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.899 [2024-12-09 11:20:16.962729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.159 11:20:17 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.159 11:20:17 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:25.159 11:20:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:25.159 { 00:05:25.159 "version": "SPDK v25.01-pre git sha1 51286f61a", 00:05:25.159 "fields": { 00:05:25.159 "major": 25, 00:05:25.159 "minor": 1, 00:05:25.159 "patch": 0, 00:05:25.159 "suffix": "-pre", 00:05:25.159 "commit": "51286f61a" 00:05:25.159 } 00:05:25.159 } 00:05:25.159 11:20:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:25.159 11:20:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:25.159 11:20:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:25.159 11:20:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:25.421 11:20:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:25.421 11:20:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.421 11:20:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.421 11:20:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:25.421 11:20:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:25.421 11:20:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.421 request: 00:05:25.421 { 00:05:25.421 "method": "env_dpdk_get_mem_stats", 00:05:25.421 "req_id": 1 00:05:25.421 } 00:05:25.421 Got JSON-RPC error response 00:05:25.421 response: 00:05:25.421 { 00:05:25.421 "code": -32601, 00:05:25.421 "message": "Method not found" 00:05:25.421 } 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.421 11:20:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3297950 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3297950 ']' 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3297950 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.421 11:20:17 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3297950 00:05:25.682 11:20:17 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.682 11:20:17 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.683 11:20:17 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3297950' 00:05:25.683 killing process with pid 3297950 00:05:25.683 11:20:17 app_cmdline -- common/autotest_common.sh@973 -- # kill 3297950 00:05:25.683 11:20:17 app_cmdline -- common/autotest_common.sh@978 -- # wait 3297950 00:05:25.683 00:05:25.683 real 0m1.186s 00:05:25.683 user 0m1.429s 00:05:25.683 sys 0m0.398s 00:05:25.683 11:20:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.683 11:20:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.683 ************************************ 00:05:25.683 END TEST app_cmdline 00:05:25.683 ************************************ 00:05:25.945 11:20:17 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:25.945 11:20:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.945 11:20:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.945 11:20:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.945 ************************************ 00:05:25.945 START TEST version 00:05:25.945 ************************************ 00:05:25.945 11:20:17 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:25.945 * Looking for test storage... 
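(Two things worth pulling out of the cmdline trace above: the target is launched with an RPC allowlist, --rpcs-allowed spdk_get_version,rpc_get_methods, and the NOT wrapper then asserts that anything off that list is rejected with JSON-RPC -32601 "Method not found" rather than the -32603 seen in the lock tests. Condensed, with the full build paths from the log shortened:

spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
rpc.py spdk_get_version          # ok: {"version": "SPDK v25.01-pre git sha1 51286f61a", ...}
rpc.py rpc_get_methods           # ok: lists exactly the two allowed methods
rpc.py env_dpdk_get_mem_stats    # rejected: -32601 "Method not found"
)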
00:05:25.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:25.945 11:20:17 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.945 11:20:17 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.945 11:20:17 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.945 11:20:18 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.945 11:20:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.945 11:20:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.945 11:20:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.945 11:20:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.945 11:20:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.945 11:20:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.945 11:20:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.945 11:20:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.945 11:20:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.945 11:20:18 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.945 11:20:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.945 11:20:18 version -- scripts/common.sh@344 -- # case "$op" in 00:05:25.945 11:20:18 version -- scripts/common.sh@345 -- # : 1 00:05:25.945 11:20:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.945 11:20:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.945 11:20:18 version -- scripts/common.sh@365 -- # decimal 1 00:05:25.945 11:20:18 version -- scripts/common.sh@353 -- # local d=1 00:05:25.945 11:20:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.945 11:20:18 version -- scripts/common.sh@355 -- # echo 1 00:05:25.945 11:20:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.945 11:20:18 version -- scripts/common.sh@366 -- # decimal 2 00:05:25.945 11:20:18 version -- scripts/common.sh@353 -- # local d=2 00:05:25.945 11:20:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.945 11:20:18 version -- scripts/common.sh@355 -- # echo 2 00:05:25.945 11:20:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.945 11:20:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.945 11:20:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.945 11:20:18 version -- scripts/common.sh@368 -- # return 0 00:05:25.945 11:20:18 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.945 11:20:18 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.945 --rc genhtml_branch_coverage=1 00:05:25.945 --rc genhtml_function_coverage=1 00:05:25.945 --rc genhtml_legend=1 00:05:25.945 --rc geninfo_all_blocks=1 00:05:25.945 --rc geninfo_unexecuted_blocks=1 00:05:25.945 00:05:25.945 ' 00:05:25.945 11:20:18 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.945 --rc genhtml_branch_coverage=1 00:05:25.945 --rc genhtml_function_coverage=1 00:05:25.945 --rc genhtml_legend=1 00:05:25.946 --rc geninfo_all_blocks=1 00:05:25.946 --rc geninfo_unexecuted_blocks=1 00:05:25.946 00:05:25.946 ' 00:05:25.946 11:20:18 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.946 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.946 --rc genhtml_branch_coverage=1 00:05:25.946 --rc genhtml_function_coverage=1 00:05:25.946 --rc genhtml_legend=1 00:05:25.946 --rc geninfo_all_blocks=1 00:05:25.946 --rc geninfo_unexecuted_blocks=1 00:05:25.946 00:05:25.946 ' 00:05:25.946 11:20:18 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.946 --rc genhtml_branch_coverage=1 00:05:25.946 --rc genhtml_function_coverage=1 00:05:25.946 --rc genhtml_legend=1 00:05:25.946 --rc geninfo_all_blocks=1 00:05:25.946 --rc geninfo_unexecuted_blocks=1 00:05:25.946 00:05:25.946 ' 00:05:25.946 11:20:18 version -- app/version.sh@17 -- # get_header_version major 00:05:25.946 11:20:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.946 11:20:18 version -- app/version.sh@14 -- # cut -f2 00:05:25.946 11:20:18 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.946 11:20:18 version -- app/version.sh@17 -- # major=25 00:05:25.946 11:20:18 version -- app/version.sh@18 -- # get_header_version minor 00:05:26.207 11:20:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.207 11:20:18 version -- app/version.sh@14 -- # cut -f2 00:05:26.207 11:20:18 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.207 11:20:18 version -- app/version.sh@18 -- # minor=1 00:05:26.207 11:20:18 version -- app/version.sh@19 -- # get_header_version patch 00:05:26.207 11:20:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.207 11:20:18 version -- app/version.sh@14 -- # cut -f2 00:05:26.207 11:20:18 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.207 11:20:18 version -- app/version.sh@19 -- # patch=0 00:05:26.207 11:20:18 version -- app/version.sh@20 -- # get_header_version suffix 00:05:26.207 11:20:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.207 11:20:18 version -- app/version.sh@14 -- # cut -f2 00:05:26.207 11:20:18 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.207 11:20:18 version -- app/version.sh@20 -- # suffix=-pre 00:05:26.207 11:20:18 version -- app/version.sh@22 -- # version=25.1 00:05:26.207 11:20:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:26.207 11:20:18 version -- app/version.sh@28 -- # version=25.1rc0 00:05:26.207 11:20:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:26.207 11:20:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:26.207 11:20:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:26.207 11:20:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:26.207 00:05:26.207 real 0m0.276s 00:05:26.207 user 0m0.170s 00:05:26.207 sys 0m0.154s 00:05:26.207 11:20:18 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.207 
11:20:18 version -- common/autotest_common.sh@10 -- # set +x 00:05:26.207 ************************************ 00:05:26.207 END TEST version 00:05:26.207 ************************************ 00:05:26.207 11:20:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:26.207 11:20:18 -- spdk/autotest.sh@194 -- # uname -s 00:05:26.207 11:20:18 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:26.207 11:20:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:26.207 11:20:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:26.207 11:20:18 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:26.207 11:20:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.207 11:20:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.207 11:20:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:26.207 11:20:18 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:26.207 11:20:18 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:26.207 11:20:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:26.207 11:20:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.207 11:20:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.207 ************************************ 00:05:26.207 START TEST nvmf_tcp 00:05:26.207 ************************************ 00:05:26.207 11:20:18 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:26.470 * Looking for test storage... 
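(For reference, the get_header_version calls traced above assemble "25.1rc0" by scraping include/spdk/version.h: MAJOR.MINOR, PATCH appended only when nonzero, the "-pre" suffix rendered as "rc0", and the result cross-checked against python3 -c 'import spdk; print(spdk.__version__)'. A condensed reconstruction — cut -f2 relies on the tab-separated #define lines in that header, exactly as in the traced pipeline:

get_header_version() {   # e.g. get_header_version MAJOR -> 25
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
}
version="$(get_header_version MAJOR).$(get_header_version MINOR)"
(( $(get_header_version PATCH) != 0 )) && version+=".$(get_header_version PATCH)"
[[ "$(get_header_version SUFFIX)" == -pre ]] && version+=rc0
echo "$version"   # -> 25.1rc0
)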
00:05:26.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.470 11:20:18 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.470 --rc genhtml_branch_coverage=1 00:05:26.470 --rc genhtml_function_coverage=1 00:05:26.470 --rc genhtml_legend=1 00:05:26.470 --rc geninfo_all_blocks=1 00:05:26.470 --rc geninfo_unexecuted_blocks=1 00:05:26.470 00:05:26.470 ' 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.470 --rc genhtml_branch_coverage=1 00:05:26.470 --rc genhtml_function_coverage=1 00:05:26.470 --rc genhtml_legend=1 00:05:26.470 --rc geninfo_all_blocks=1 00:05:26.470 --rc geninfo_unexecuted_blocks=1 00:05:26.470 00:05:26.470 ' 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.470 --rc genhtml_branch_coverage=1 00:05:26.470 --rc genhtml_function_coverage=1 00:05:26.470 --rc genhtml_legend=1 00:05:26.470 --rc geninfo_all_blocks=1 00:05:26.470 --rc geninfo_unexecuted_blocks=1 00:05:26.470 00:05:26.470 ' 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.470 --rc genhtml_branch_coverage=1 00:05:26.470 --rc genhtml_function_coverage=1 00:05:26.470 --rc genhtml_legend=1 00:05:26.470 --rc geninfo_all_blocks=1 00:05:26.470 --rc geninfo_unexecuted_blocks=1 00:05:26.470 00:05:26.470 ' 00:05:26.470 11:20:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:26.470 11:20:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:26.470 11:20:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.470 11:20:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.470 ************************************ 00:05:26.470 START TEST nvmf_target_core 00:05:26.470 ************************************ 00:05:26.470 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:26.734 * Looking for test storage... 00:05:26.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.734 --rc genhtml_branch_coverage=1 00:05:26.734 --rc genhtml_function_coverage=1 00:05:26.734 --rc genhtml_legend=1 00:05:26.734 --rc geninfo_all_blocks=1 00:05:26.734 --rc geninfo_unexecuted_blocks=1 00:05:26.734 00:05:26.734 ' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.734 --rc genhtml_branch_coverage=1 00:05:26.734 --rc genhtml_function_coverage=1 00:05:26.734 --rc genhtml_legend=1 00:05:26.734 --rc geninfo_all_blocks=1 00:05:26.734 --rc geninfo_unexecuted_blocks=1 00:05:26.734 00:05:26.734 ' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.734 --rc genhtml_branch_coverage=1 00:05:26.734 --rc genhtml_function_coverage=1 00:05:26.734 --rc genhtml_legend=1 00:05:26.734 --rc geninfo_all_blocks=1 00:05:26.734 --rc geninfo_unexecuted_blocks=1 00:05:26.734 00:05:26.734 ' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.734 --rc genhtml_branch_coverage=1 00:05:26.734 --rc genhtml_function_coverage=1 00:05:26.734 --rc genhtml_legend=1 00:05:26.734 --rc geninfo_all_blocks=1 00:05:26.734 --rc geninfo_unexecuted_blocks=1 00:05:26.734 00:05:26.734 ' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:26.734 
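The lcov probe traced repeatedly above boils down to one helper: lt 1.15 2 asks cmp_versions in scripts/common.sh whether the installed lcov (1.15 here) predates 2.x, in which case the legacy --rc lcov_branch_coverage/lcov_function_coverage options get exported through LCOV_OPTS and LCOV. A condensed sketch of that comparison, simplified from the trace (the real helper also sanitizes each field through decimal and supports more operators):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local op=$2 v ver1 ver2
        local IFS=.-:                  # split fields on dots, dashes, colons, as traced
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]             # every field compared equal
    }
    lt 1.15 2 && echo "legacy lcov, exporting --rc lcov_*_coverage=1"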
************************************ 00:05:26.734 START TEST nvmf_abort 00:05:26.734 ************************************ 00:05:26.734 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:26.997 * Looking for test storage... 00:05:26.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:26.997 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.997 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.997 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.997 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.997 --rc genhtml_branch_coverage=1 00:05:26.997 --rc genhtml_function_coverage=1 00:05:26.997 --rc genhtml_legend=1 00:05:26.997 --rc geninfo_all_blocks=1 00:05:26.997 --rc geninfo_unexecuted_blocks=1 00:05:26.997 00:05:26.997 ' 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.997 --rc genhtml_branch_coverage=1 00:05:26.997 --rc genhtml_function_coverage=1 00:05:26.997 --rc genhtml_legend=1 00:05:26.997 --rc geninfo_all_blocks=1 00:05:26.997 --rc geninfo_unexecuted_blocks=1 00:05:26.997 00:05:26.997 ' 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.997 --rc genhtml_branch_coverage=1 00:05:26.997 --rc genhtml_function_coverage=1 00:05:26.997 --rc genhtml_legend=1 00:05:26.997 --rc geninfo_all_blocks=1 00:05:26.997 --rc geninfo_unexecuted_blocks=1 00:05:26.997 00:05:26.997 ' 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.997 --rc genhtml_branch_coverage=1 00:05:26.997 --rc genhtml_function_coverage=1 00:05:26.997 --rc genhtml_legend=1 00:05:26.997 --rc geninfo_all_blocks=1 00:05:26.997 --rc geninfo_unexecuted_blocks=1 00:05:26.997 00:05:26.997 ' 00:05:26.997 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
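Both sourcings of nvmf/common.sh above log the same non-fatal noise, "line 33: [: : integer expression expected": the traced test '[' '' -eq 1 ']' hands test(1) an empty string where -eq demands an integer, because the flag variable being checked is unset in this configuration. The trace never shows that variable's name, so FLAG below is a hypothetical stand-in; defaulting the expansion is one way such a check stays quiet:

    FLAG=""                           # hypothetical stand-in for the unset flag
    if [ "$FLAG" -eq 1 ]; then        # reproduces: [: : integer expression expected
        :
    fi
    if [ "${FLAG:-0}" -eq 1 ]; then   # hedged fix: empty now expands to integer 0
        :
    fi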
00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:26.998 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:35.152 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:35.153 11:20:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:35.153 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:35.153 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:35.153 11:20:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:35.153 Found net devices under 0000:31:00.0: cvl_0_0 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:35.153 Found net devices under 0000:31:00.1: cvl_0_1 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:35.153 11:20:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:35.153 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:35.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:35.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:05:35.154 00:05:35.154 --- 10.0.0.2 ping statistics --- 00:05:35.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.154 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:35.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:35.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:05:35.154 00:05:35.154 --- 10.0.0.1 ping statistics --- 00:05:35.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.154 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3302401 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3302401 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3302401 ']' 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.154 11:20:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.154 [2024-12-09 11:20:26.574884] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
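The nvmf_tcp_init sequence traced above turns the two E810 ports into a point-to-point NVMe/TCP rig: target port cvl_0_0 moves into a fresh network namespace with 10.0.0.2, initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, an iptables rule (tagged SPDK_NVMF so teardown can find it) opens port 4420, and a ping in each direction proves the path before the target starts. Condensed from the trace, with the device and namespace names of this run:

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                 # tagged for later cleanup
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator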
00:05:35.154 [2024-12-09 11:20:26.574945] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:35.154 [2024-12-09 11:20:26.678060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.154 [2024-12-09 11:20:26.731915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:35.154 [2024-12-09 11:20:26.731968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:35.154 [2024-12-09 11:20:26.731977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.154 [2024-12-09 11:20:26.731985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.154 [2024-12-09 11:20:26.731991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:35.154 [2024-12-09 11:20:26.733783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.154 [2024-12-09 11:20:26.733948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.154 [2024-12-09 11:20:26.733948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 [2024-12-09 11:20:27.442434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 Malloc0 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 Delay0 
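With nvmf_tgt up inside the namespace, abort.sh provisions it over JSON-RPC. rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock, which stays reachable from the root namespace because a pathname Unix socket is a filesystem object, not a network one. The three calls traced above map roughly onto the following (flag readings hedged from rpc.py's help text):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # -u IO unit size, -a admin queue depth
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM bdev, 4 KiB blocks
    # Delay0 wraps Malloc0 with 1,000,000 us (1 s) average and p99 latency on both
    # reads and writes, so a deep queue stays outstanding long enough to be aborted
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000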
00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 [2024-12-09 11:20:27.517181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.415 11:20:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:35.676 [2024-12-09 11:20:27.646513] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:38.223 Initializing NVMe Controllers 00:05:38.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:38.223 controller IO queue size 128 less than required 00:05:38.223 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:38.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:38.223 Initialization complete. Launching workers. 
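The rest of the provisioning and the workload itself, again mirroring the trace: cnode0 is created with -a (allow any host) and serial SPDK0, Delay0 becomes its namespace, listeners go up on 10.0.0.2:4420 for both the subsystem and discovery, and the abort example then drives queue depth 128 on one core for one second. Because every I/O sits behind the one-second delay bdev, nearly the whole queue is still outstanding whenever an abort is issued; the "unsuccessful" aborts in the summary below are the expected races where the target completed the I/O first.

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128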
00:05:38.223 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28897 00:05:38.223 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28958, failed to submit 62 00:05:38.223 success 28901, unsuccessful 57, failed 0 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:38.223 rmmod nvme_tcp 00:05:38.223 rmmod nvme_fabrics 00:05:38.223 rmmod nvme_keyring 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3302401 ']' 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3302401 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3302401 ']' 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3302401 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3302401 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3302401' 00:05:38.223 killing process with pid 3302401 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3302401 00:05:38.223 11:20:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3302401 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:38.223 11:20:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:38.223 11:20:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:40.140 00:05:40.140 real 0m13.391s 00:05:40.140 user 0m14.300s 00:05:40.140 sys 0m6.491s 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.140 ************************************ 00:05:40.140 END TEST nvmf_abort 00:05:40.140 ************************************ 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:40.140 ************************************ 00:05:40.140 START TEST nvmf_ns_hotplug_stress 00:05:40.140 ************************************ 00:05:40.140 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:40.403 * Looking for test storage... 
00:05:40.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.403 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.404 --rc genhtml_branch_coverage=1 00:05:40.404 --rc genhtml_function_coverage=1 00:05:40.404 --rc genhtml_legend=1 00:05:40.404 --rc geninfo_all_blocks=1 00:05:40.404 --rc geninfo_unexecuted_blocks=1 00:05:40.404 00:05:40.404 ' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.404 --rc genhtml_branch_coverage=1 00:05:40.404 --rc genhtml_function_coverage=1 00:05:40.404 --rc genhtml_legend=1 00:05:40.404 --rc geninfo_all_blocks=1 00:05:40.404 --rc geninfo_unexecuted_blocks=1 00:05:40.404 00:05:40.404 ' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.404 --rc genhtml_branch_coverage=1 00:05:40.404 --rc genhtml_function_coverage=1 00:05:40.404 --rc genhtml_legend=1 00:05:40.404 --rc geninfo_all_blocks=1 00:05:40.404 --rc geninfo_unexecuted_blocks=1 00:05:40.404 00:05:40.404 ' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.404 --rc genhtml_branch_coverage=1 00:05:40.404 --rc genhtml_function_coverage=1 00:05:40.404 --rc genhtml_legend=1 00:05:40.404 --rc geninfo_all_blocks=1 00:05:40.404 --rc geninfo_unexecuted_blocks=1 00:05:40.404 00:05:40.404 ' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:40.404 11:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:48.552 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:48.552 
11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:48.552 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:48.552 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:48.553 Found net devices under 0000:31:00.0: cvl_0_0 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:48.553 Found net devices under 0000:31:00.1: cvl_0_1 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:48.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:48.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms
00:05:48.553
00:05:48.553 --- 10.0.0.2 ping statistics ---
00:05:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:48.553 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:48.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:48.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms
00:05:48.553
00:05:48.553 --- 10.0.0.1 ping statistics ---
00:05:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:48.553 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:48.553 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3307503
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3307503
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3307503 ']'
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:48.553 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:48.553 [2024-12-09 11:20:40.095722] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:48.553 [2024-12-09 11:20:40.095777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:48.553 [2024-12-09 11:20:40.195113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:48.553 [2024-12-09 11:20:40.245314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:48.553 [2024-12-09 11:20:40.245366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:48.553 [2024-12-09 11:20:40.245374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:48.553 [2024-12-09 11:20:40.245382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:48.553 [2024-12-09 11:20:40.245388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
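The nvmf target is now running inside the cvl_0_0_ns_spdk namespace, and the trace that follows is the body of test/nvmf/target/ns_hotplug_stress.sh: create the TCP transport and subsystem, expose the Delay0 and NULL1 bdevs as namespaces, start spdk_nvme_perf against 10.0.0.2:4420, then repeatedly hot-remove and re-add a namespace while growing NULL1. A minimal sketch of that flow, reconstructed from the xtrace entries below (the RPC commands and their arguments are taken verbatim from the log; the backgrounding with '&', the PERF_PID=$! assignment, and the while-loop framing are assumptions rather than the verbatim script source):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 "$null_size" 512
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Assumed: perf runs in the background so namespaces can be hotplugged under live I/O.
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  while kill -0 "$PERF_PID"; do          # loop until the 30-second perf run exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))         # the log shows 1001, 1002, ... one step per pass
    $rpc_py bdev_null_resize NULL1 "$null_size"
  done

In the trace, each pass is visible as kill -0 3307983 (the perf PID) followed by the remove/add/resize RPCs; the bare "true" lines are the printed output of each bdev_null_resize call.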
00:05:48.553 [2024-12-09 11:20:40.247426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:48.553 [2024-12-09 11:20:40.247598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:48.553 [2024-12-09 11:20:40.247599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:05:48.814 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:05:49.075 [2024-12-09 11:20:41.105267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:49.075 11:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:05:49.336 11:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:49.336 [2024-12-09 11:20:41.474749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:49.598 11:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:49.598 11:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:05:49.860 Malloc0
00:05:49.860 11:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:50.121 Delay0
00:05:50.121 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:50.121 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:05:50.383 NULL1
00:05:50.383 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:50.645 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3307983 00:05:50.645 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:50.645 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:50.645 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.645 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.906 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:50.906 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:51.167 true 00:05:51.167 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:51.167 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.442 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.442 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:51.442 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:51.702 true 00:05:51.702 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:51.702 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.963 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.963 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:51.963 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:52.224 true 00:05:52.224 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:52.224 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.485 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.485 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:52.485 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:52.746 true 00:05:52.746 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:52.746 11:20:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.006 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.267 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:53.267 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:53.267 true 00:05:53.267 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:53.267 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.528 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.789 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:53.789 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:53.789 true 00:05:53.789 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:53.789 11:20:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.050 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.312 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:54.312 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:54.312 true 00:05:54.312 11:20:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:54.312 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.574 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.835 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:54.835 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:54.835 true 00:05:54.835 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:54.835 11:20:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.095 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.357 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:55.357 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:55.357 true 00:05:55.618 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:55.618 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.618 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.880 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:55.880 11:20:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:56.141 true 00:05:56.141 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:56.141 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.141 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.402 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:56.402 11:20:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:56.664 true 00:05:56.664 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:56.664 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.664 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.925 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:56.925 11:20:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:57.186 true 00:05:57.186 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:57.186 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.186 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.446 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:57.446 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:57.707 true 00:05:57.707 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:57.707 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.967 11:20:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.967 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:57.967 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:58.227 true 00:05:58.227 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:58.227 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.488 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.488 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:58.488 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:58.749 true 00:05:58.749 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:58.749 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.011 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.011 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:59.011 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:59.272 true 00:05:59.272 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:59.272 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.532 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.793 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:59.793 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:59.793 true 00:05:59.793 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:05:59.793 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.054 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.315 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:00.315 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:00.315 true 00:06:00.315 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:00.315 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.575 11:20:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.837 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:00.837 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:00.837 true 00:06:00.837 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:00.837 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.097 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.358 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:01.358 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:01.358 true 00:06:01.619 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:01.619 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.619 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.879 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:01.879 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:02.140 true 00:06:02.140 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:02.140 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.140 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.401 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:02.401 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:02.662 true 00:06:02.662 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:02.662 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.662 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.923 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:02.923 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:03.183 true 00:06:03.183 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:03.183 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.444 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.444 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:03.444 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:03.704 true 00:06:03.704 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:03.704 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.964 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.964 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:03.964 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:04.225 true 00:06:04.225 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:04.225 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.486 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.747 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:04.747 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:04.747 true 00:06:04.747 11:20:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:04.747 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.008 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.267 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:05.267 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:05.267 true 00:06:05.267 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:05.267 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.528 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.789 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:05.789 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:05.789 true 00:06:05.789 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:05.789 11:20:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.050 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.311 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:06.311 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:06.311 true 00:06:06.311 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:06.311 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.571 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.832 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:06.832 11:20:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:06.832 true 00:06:06.832 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:06.832 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.092 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.353 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:07.353 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:07.353 true 00:06:07.614 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:07.614 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.614 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.874 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:07.874 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:08.135 true 00:06:08.135 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:08.135 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.135 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.396 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:08.396 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:08.657 true 00:06:08.657 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:08.657 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.657 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.918 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:08.918 11:21:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:09.179 true 00:06:09.179 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:09.179 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.440 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.440 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:09.440 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:09.700 true 00:06:09.700 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:09.700 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.963 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.963 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:09.963 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:10.224 true 00:06:10.224 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:10.224 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.484 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.484 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:10.484 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:10.744 true 00:06:10.744 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:10.744 11:21:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.005 11:21:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.266 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:11.266 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:11.266 true 00:06:11.266 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:11.266 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.527 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.788 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:11.788 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:11.788 true 00:06:11.789 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:11.789 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.058 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.319 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:12.319 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:12.319 true 00:06:12.319 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:12.319 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.581 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.842 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:12.842 11:21:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:12.842 true 00:06:13.125 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:13.125 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.125 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.386 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:13.386 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:13.647 true 00:06:13.647 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:13.647 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.647 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.909 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:13.909 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:14.170 true 00:06:14.170 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:14.171 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.171 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.432 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:14.432 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:14.693 true 00:06:14.693 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:14.693 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.693 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.955 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:14.955 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:15.216 true 00:06:15.216 11:21:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:15.216 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.479 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.479 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:15.479 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:15.740 true 00:06:15.740 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:15.741 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.002 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.002 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:16.002 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:16.263 true 00:06:16.263 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:16.263 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.523 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.523 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:16.523 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:16.783 true 00:06:16.783 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:16.783 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.044 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.305 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:17.305 11:21:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:17.305 true 00:06:17.305 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:17.305 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.566 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.829 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:17.829 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:17.829 true 00:06:17.829 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:17.829 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.090 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.351 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:18.351 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:18.351 true 00:06:18.613 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:18.614 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.614 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.874 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:18.874 11:21:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:18.874 true 00:06:19.137 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:19.137 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.137 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.398 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:19.398 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:19.658 true 00:06:19.658 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:19.658 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.658 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.919 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:19.919 11:21:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:20.180 true 00:06:20.180 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:20.180 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.180 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.441 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:20.441 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:20.702 true 00:06:20.702 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983 00:06:20.702 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.963 11:21:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.963 Initializing NVMe Controllers 00:06:20.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:20.963 Controller IO queue size 128, less than required. 00:06:20.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:20.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:20.963 Initialization complete. Launching workers. 
00:06:20.963 ========================================================
00:06:20.963                                                                              Latency(us)
00:06:20.963 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:20.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30340.23      14.81    4218.67    1445.59    8566.66
00:06:20.963 ========================================================
00:06:20.963 Total                                                                    :   30340.23      14.81    4218.67    1445.59    8566.66
00:06:20.963 
00:06:20.963 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:06:20.963 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:06:21.225 true
00:06:21.225 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3307983
00:06:21.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3307983) - No such process
00:06:21.225 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3307983
00:06:21.487 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:21.487 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:21.487 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:21.487 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:21.487 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:21.487 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:21.487 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:21.748 null0
00:06:21.748 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:21.748 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:21.748 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:22.012 null1
00:06:22.012 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:22.012 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:22.012 11:21:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:22.012 null2
00:06:22.012 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:22.012 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
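The resize loop ends here: kill -0 now reports the I/O generator (3307983) gone, so the script reaps it, removes both namespaces, and starts creating eight null bdevs for the multi-threaded phase (null3 through null7 follow below). As a sanity check on the latency summary above, assuming its counters are self-consistent: 14.81 MiB/s divided by 30340.23 IOPS works out to roughly 512 bytes per request, so the generator was issuing 512-byte I/Os. The earlier "queue size 128, less than required" notice only means the requested queue depth exceeded the controller's 128-entry I/O queue; the excess requests are queued in the NVMe driver rather than failed.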
11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:22.272 null3 00:06:22.272 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:22.272 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:22.272 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:22.534 null4 00:06:22.534 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:22.534 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:22.534 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:22.534 null5 00:06:22.534 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:22.534 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:22.534 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:22.796 null6 00:06:22.796 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:22.796 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:22.796 11:21:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:23.063 null7 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
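The sh@58-sh@60 entries above created null0 through null7, and the sh@62-sh@64 entries that follow start launching the worker processes. The creation loop, reconstructed as a minimal bash sketch (rpc is the same stand-in for the full rpc.py path as in the sketch above):

    nthreads=8
    pids=()                                          # sh@58: worker PIDs, collected below
    for ((i = 0; i < nthreads; i++)); do             # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096    # sh@60: 100 MB null bdev, 4096-byte blocks; prints the name
    done

Each call prints the new bdev's name, which is why a bare "null0", "null1", ... line follows every create in the log.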
00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.063 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
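The interleaved sh@62-sh@64 and sh@14-sh@18 entries from here on come from eight concurrent workers, one namespace each: worker i attaches null bdev i as namespace i+1 and detaches it again, ten times over. The parent records each worker's PID and, in the sh@66 entry just below, waits for all eight. A sketch of that structure, reconstructed from the traces with the same stand-ins as above:

    add_remove() {                                                                     # sh@14: one worker
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                                 # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
        done
    }

    for ((i = 0; i < nthreads; i++)); do             # sh@62
        add_remove $((i + 1)) "null$i" &             # sh@63: run the worker in the background
        pids+=($!)                                   # sh@64: remember its PID
    done
    wait "${pids[@]}"                                # sh@66: e.g. wait 3315230 3315231 ...

Because the eight workers run unsynchronized, their sh@16-sh@18 trace lines interleave arbitrarily in the log below.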
00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3315230 3315231 3315235 3315238 3315241 3315244 3315246 3315250 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.064 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.326 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.589 11:21:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.589 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.851 11:21:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.114 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.377 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.639 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.901 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.901 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.901 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:24.902 11:21:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.902 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.902 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.902 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.902 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.902 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.902 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.902 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:25.164 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:25.427 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.690 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.953 11:21:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:25.953 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:26.215 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.478 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.740 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
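The trace above is the core of the hotplug stress phase: ns_hotplug_stress.sh@17 attaches namespaces to nqn.2016-06.io.spdk:cnode1 with nvmf_subsystem_add_ns and ns_hotplug_stress.sh@18 detaches them with nvmf_subsystem_remove_ns, with the (( i < 10 )) guards bounding each loop at ten rounds. A minimal sketch of one such cycle, assuming eight null bdevs named null0 through null7 and a local rpc.py; this is an approximation of the traced commands, not the verbatim test script:

rpc=./scripts/rpc.py                        # assumed path to SPDK's RPC client
nqn=nqn.2016-06.io.spdk:cnode1              # subsystem exercised in the trace
for ((i = 0; i < 10; i++)); do              # ten rounds, per (( i < 10 ))
    for n in {1..8}; do                     # attach NSID n backed by null(n-1)
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in {1..8}; do                     # detach every namespace again
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done

The real test runs several of these loops concurrently, which is why the add and remove records arrive out of order in the timestamps above.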
nvmf/common.sh@516 -- # nvmfcleanup 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:27.001 11:21:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:27.001 rmmod nvme_tcp 00:06:27.001 rmmod nvme_fabrics 00:06:27.002 rmmod nvme_keyring 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3307503 ']' 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3307503 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3307503 ']' 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3307503 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3307503 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3307503' 00:06:27.002 killing process with pid 3307503 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3307503 00:06:27.002 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3307503 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.263 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.181 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:29.181 00:06:29.181 real 0m48.977s 00:06:29.181 user 3m20.398s 00:06:29.181 sys 0m16.723s 00:06:29.181 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.181 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:29.181 ************************************ 00:06:29.181 END TEST nvmf_ns_hotplug_stress 00:06:29.181 ************************************ 00:06:29.181 11:21:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:29.181 11:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.181 11:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.181 11:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.444 ************************************ 00:06:29.444 START TEST nvmf_delete_subsystem 00:06:29.444 ************************************ 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:29.444 * Looking for test storage... 
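The records just before the END TEST banner above are nvmftestfini tearing the run down: modprobe -v -r unloads the initiator stack (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), killprocess stops the nvmf_tgt reactor with pid 3307503, iptables rules not tagged SPDK_NVMF are restored, and the test address on cvl_0_1 is flushed. A condensed sketch of that sequence, using the pid and interface recorded in this run:

pid=3307503                                          # nvmf_tgt pid from the log
modprobe -v -r nvme-tcp nvme-fabrics                 # unload initiator modules
kill "$pid"                                          # harness then waits on the pid
iptables-save | grep -v SPDK_NVMF | iptables-restore # keep only non-SPDK rules
ip -4 addr flush cvl_0_1                             # clear the test NIC address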
00:06:29.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.444 --rc genhtml_branch_coverage=1 00:06:29.444 --rc genhtml_function_coverage=1 00:06:29.444 --rc genhtml_legend=1 00:06:29.444 --rc geninfo_all_blocks=1 00:06:29.444 --rc geninfo_unexecuted_blocks=1 00:06:29.444 00:06:29.444 ' 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
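The scripts/common.sh trace above implements the lcov version gate: lt 1.15 2 asks whether the installed lcov (its version field pulled out with awk '{print $NF}') predates 2, and cmp_versions splits both strings on '.', '-' and ':' and compares the numeric fields left to right. A compact sketch of that comparison, hedged as an approximation of the traced helper:

lt() {                                   # return 0 when version $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v a b
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                             # equal versions are not less-than
}
lt 1.15 2 && echo "lcov predates 2"      # matches the call in the trace

Fields are assumed numeric here; the traced helper additionally validates each field against ^[0-9]+$ before comparing, as the [[ 1 =~ ^[0-9]+$ ]] records show.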
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.444 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
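The nvmf/common.sh records above fix the test environment for delete_subsystem.sh: three TCP listener ports (4420, 4421, 4422), the 192.168.100.x address prefix, and a host NQN minted per run with nvme gen-hostnqn. The same defaults as a standalone sketch, assuming nvme-cli is installed:

NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*:}     # uuid suffix of the NQN, as traced above
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")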
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
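The PATH records above balloon because paths/export.sh prepends the Go, protoc and golangci directories every time it is sourced, leaving several stacked copies ahead of the system directories. Purely as an illustration, a one-line dedup that keeps the first occurrence of each entry:

PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')  # drop repeats
PATH=${PATH%:}                                                    # trim trailing :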
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.445 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.707 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:29.707 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:29.707 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:29.707 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
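The message '[: : integer expression expected' above is genuine stderr from nvmf/common.sh line 33, where '[' '' -eq 1 ']' runs with an empty left operand; the comparison simply fails and the test continues, so it is noise rather than a failure. A defensive form that avoids the message, using a hypothetical flag name for illustration:

SPDK_EXAMPLE_FLAG=""                          # empty, as in the traced run
if [ "${SPDK_EXAMPLE_FLAG:-0}" -eq 1 ]; then  # default empty/unset to 0
    echo "flag enabled"
fi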
local -ga x722 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:37.864 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.864 
11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:37.864 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:37.864 Found net devices under 0000:31:00.0: cvl_0_0 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.864 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:37.865 Found net devices under 0000:31:00.1: cvl_0_1 
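The trace above is gather_supported_nvmf_pci_devs at work: nvmf/common.sh seeds ID lists for Intel E810 (0x1592, 0x159b), X722 (0x37d2) and a range of Mellanox parts, filters the machine's PCI bus cache against them, and then resolves each surviving function to its kernel interface through sysfs. A minimal sketch of that resolution step, assuming the same two E810 functions found in this run (the array and echo lines are lifted from the trace; the loop scaffolding is simplified):

    # Sketch only: map a PCI function to its net interface(s) the way the
    # trace above does; the link up/down check and driver handling are omitted.
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdev names
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

On this machine both functions resolve to one interface each, cvl_0_0 and cvl_0_1, which the script is about to split into a target side and an initiator side.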
00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:37.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:06:37.865 00:06:37.865 --- 10.0.0.2 ping statistics --- 00:06:37.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.865 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:06:37.865 00:06:37.865 --- 10.0.0.1 ping statistics --- 00:06:37.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.865 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:37.865 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3320565 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3320565 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3320565 ']' 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.865 11:21:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.865 [2024-12-09 11:21:29.066818] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:06:37.865 [2024-12-09 11:21:29.066872] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.865 [2024-12-09 11:21:29.148552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.865 [2024-12-09 11:21:29.186359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.865 [2024-12-09 11:21:29.186400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.865 [2024-12-09 11:21:29.186408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.865 [2024-12-09 11:21:29.186414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.865 [2024-12-09 11:21:29.186420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.865 [2024-12-09 11:21:29.187655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.865 [2024-12-09 11:21:29.187657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.865 [2024-12-09 11:21:29.902070] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:37.865 11:21:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.865 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.865 [2024-12-09 11:21:29.926276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.866 NULL1 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.866 Delay0 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3320605 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:37.866 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:38.127 [2024-12-09 11:21:30.023131] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
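At this point the target is fully provisioned and a 5-second random 70/30 read/write workload is running against it; the interesting part of the test, deleting the subsystem underneath that workload, comes next. Condensed, the sequence the trace just executed looks like the sketch below (rpc_cmd is the suite's shorthand for driving SPDK's JSON-RPC interface; arguments are copied from the trace, comments are editorial):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512        # null bdev: no real media behind it
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # host side, outside the target's network namespace:
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                   -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The bdev_delay parameters are microseconds, so Delay0 holds every I/O for roughly a second; with queue depth 128 that guarantees a full queue of in-flight commands for nvmf_delete_subsystem to race against. The "completed with error (sct=0, sc=8)" lines that follow are the expected outcome: generic NVMe status 0x08 is "Command Aborted due to SQ Deletion", i.e. the pending I/Os are aborted as the subsystem's queue pairs are torn down.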
00:06:40.041 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:40.041 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.041 11:21:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 [2024-12-09 11:21:32.067850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbaf00 is same with the state(6) to be set 00:06:40.042 Write completed with 
error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O 
failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 [2024-12-09 11:21:32.071810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb560000c40 is same with the state(6) to be set 00:06:40.042 starting I/O failed: -6 00:06:40.042 starting I/O failed: -6 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write 
completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Read completed with error (sct=0, sc=8) 00:06:40.042 Write completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Write completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Write completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Write completed with error (sct=0, sc=8) 00:06:40.043 Write completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Write completed with error (sct=0, sc=8) 00:06:40.043 Read completed with error (sct=0, sc=8) 00:06:40.043 Write completed with error (sct=0, sc=8) 00:06:40.987 [2024-12-09 11:21:33.039730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbc5f0 is same with the state(6) to be set 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 [2024-12-09 11:21:33.071051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbb0e0 is same with the state(6) to be set 00:06:40.987 Read completed with 
error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 [2024-12-09 11:21:33.071502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbb4a0 is same with the state(6) to be set 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 [2024-12-09 11:21:33.074135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb56000d020 is same with the state(6) to be set 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, 
sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Read completed with error (sct=0, sc=8) 00:06:40.987 Write completed with error (sct=0, sc=8) 00:06:40.987 [2024-12-09 11:21:33.074221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb56000d7e0 is same with the state(6) to be set 00:06:40.987 Initializing NVMe Controllers 00:06:40.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:40.987 Controller IO queue size 128, less than required. 00:06:40.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:40.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:40.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:40.987 Initialization complete. Launching workers. 00:06:40.987 ======================================================== 00:06:40.987 Latency(us) 00:06:40.987 Device Information : IOPS MiB/s Average min max 00:06:40.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.21 0.09 886322.33 268.50 1007711.08 00:06:40.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.24 0.08 905965.02 319.87 1010765.17 00:06:40.987 ======================================================== 00:06:40.987 Total : 340.45 0.17 895913.94 268.50 1010765.17 00:06:40.987 00:06:40.987 [2024-12-09 11:21:33.074793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbc5f0 (9): Bad file descriptor 00:06:40.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:40.987 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.987 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:40.987 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3320605 00:06:40.987 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3320605 00:06:41.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3320605) - No such process 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3320605 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3320605 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:41.560 11:21:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3320605 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 [2024-12-09 11:21:33.607298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3321388 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388 00:06:41.560 11:21:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:41.560 [2024-12-09 
11:21:33.684115] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:42.133 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:42.133 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388 00:06:42.133 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:42.704 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:42.704 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388 00:06:42.704 11:21:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:43.277 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:43.277 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388 00:06:43.278 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:43.538 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:43.538 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388 00:06:43.538 11:21:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.109 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.109 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388 00:06:44.109 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.680 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.680 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388 00:06:44.680 11:21:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.680 Initializing NVMe Controllers 00:06:44.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:44.680 Controller IO queue size 128, less than required. 00:06:44.680 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:44.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:44.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:44.680 Initialization complete. Launching workers. 
00:06:44.680 ========================================================
00:06:44.680 Latency(us)
00:06:44.680 Device Information : IOPS MiB/s Average min max
00:06:44.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002428.14 1000222.39 1008809.45
00:06:44.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003052.34 1000316.62 1009265.58
00:06:44.680 ========================================================
00:06:44.680 Total : 256.00 0.12 1002740.24 1000222.39 1009265.58
00:06:44.680
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3321388
00:06:45.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3321388) - No such process
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3321388
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:45.252 rmmod nvme_tcp
00:06:45.252 rmmod nvme_fabrics
00:06:45.252 rmmod nvme_keyring
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3320565 ']'
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3320565
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3320565 ']'
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3320565
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3320565
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3320565'
00:06:45.252 killing process with pid 3320565
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3320565
00:06:45.252 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3320565
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:45.514 11:21:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:47.432
00:06:47.432 real 0m18.142s
00:06:47.432 user 0m30.398s
00:06:47.432 sys 0m6.674s
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:47.432 ************************************
00:06:47.432 END TEST nvmf_delete_subsystem
00:06:47.432 ************************************
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:47.432 ************************************
00:06:47.432 START TEST nvmf_host_management
00:06:47.432 ************************************
00:06:47.432 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:47.695 * Looking for test storage...
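The banner pairs and the real/user/sys block above come from the suite's run_test wrapper, which brackets every test script with START TEST/END TEST markers and times it (the '[' 3 -le 1 ']' line is apparently its argument-count guard). A rough reconstruction of the shape, not the verbatim autotest_common.sh source:

    # Illustrative reconstruction of run_test; the real helper also manages
    # xtrace state and failure reporting.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # produces the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Against that frame, nvmf_delete_subsystem just finished in 18.1 s of wall time and nvmf_host_management is starting.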
00:06:47.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.695 --rc genhtml_branch_coverage=1 00:06:47.695 --rc genhtml_function_coverage=1 00:06:47.695 --rc genhtml_legend=1 00:06:47.695 --rc geninfo_all_blocks=1 00:06:47.695 --rc geninfo_unexecuted_blocks=1 00:06:47.695 00:06:47.695 ' 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.695 --rc genhtml_branch_coverage=1 00:06:47.695 --rc genhtml_function_coverage=1 00:06:47.695 --rc genhtml_legend=1 00:06:47.695 --rc geninfo_all_blocks=1 00:06:47.695 --rc geninfo_unexecuted_blocks=1 00:06:47.695 00:06:47.695 ' 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.695 --rc genhtml_branch_coverage=1 00:06:47.695 --rc genhtml_function_coverage=1 00:06:47.695 --rc genhtml_legend=1 00:06:47.695 --rc geninfo_all_blocks=1 00:06:47.695 --rc geninfo_unexecuted_blocks=1 00:06:47.695 00:06:47.695 ' 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.695 --rc genhtml_branch_coverage=1 00:06:47.695 --rc genhtml_function_coverage=1 00:06:47.695 --rc genhtml_legend=1 00:06:47.695 --rc geninfo_all_blocks=1 00:06:47.695 --rc geninfo_unexecuted_blocks=1 00:06:47.695 00:06:47.695 ' 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:47.695 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:47.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:47.696 11:21:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.845 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.845 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:55.846 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:55.846 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:55.846 Found net devices under 0000:31:00.0: cvl_0_0 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.846 11:21:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:55.846 Found net devices under 0000:31:00.1: cvl_0_1 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:55.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:06:55.846 00:06:55.846 --- 10.0.0.2 ping statistics --- 00:06:55.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.846 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:06:55.846 00:06:55.846 --- 10.0.0.1 ping statistics --- 00:06:55.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.846 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.846 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3326451 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3326451 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:55.847 11:21:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3326451 ']' 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.847 11:21:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.847 [2024-12-09 11:21:47.459927] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:06:55.847 [2024-12-09 11:21:47.459995] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.847 [2024-12-09 11:21:47.563456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.847 [2024-12-09 11:21:47.616493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.847 [2024-12-09 11:21:47.616548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.847 [2024-12-09 11:21:47.616557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.847 [2024-12-09 11:21:47.616564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.847 [2024-12-09 11:21:47.616571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
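
The target application starting here already lives inside its own network namespace. The plumbing was the nvmf_tcp_init sequence traced above: the first e810 port (cvl_0_0) is moved into a fresh namespace to play the NVMe-oF target, its sibling cvl_0_1 stays in the root namespace as the initiator, and one comment-tagged iptables rule admits the NVMe/TCP listener port. A condensed sketch of that bring-up, using the interface names and addresses from this run (the SPDK_NVMF comment is what lets teardown strip the rule with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of the previous test):

# give the target port its own namespace (interface names as in this run)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends: initiator in the root namespace, target inside the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# admit the NVMe/TCP listener port, tagged so cleanup can find the rule
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk, which is why its listener on 10.0.0.2:4420 is reachable from the root namespace only through cvl_0_1.
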
00:06:55.847 [2024-12-09 11:21:47.618600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.847 [2024-12-09 11:21:47.618766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.847 [2024-12-09 11:21:47.618931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:55.847 [2024-12-09 11:21:47.618931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.108 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.108 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:56.108 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.108 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.108 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 [2024-12-09 11:21:48.312754] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 Malloc0 00:06:56.368 [2024-12-09 11:21:48.385299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3326743 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3326743 /var/tmp/bdevperf.sock 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3326743 ']' 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:56.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:56.368 { 00:06:56.368 "params": { 00:06:56.368 "name": "Nvme$subsystem", 00:06:56.368 "trtype": "$TEST_TRANSPORT", 00:06:56.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:56.368 "adrfam": "ipv4", 00:06:56.368 "trsvcid": "$NVMF_PORT", 00:06:56.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:56.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:56.368 "hdgst": ${hdgst:-false}, 00:06:56.368 "ddgst": ${ddgst:-false} 00:06:56.368 }, 00:06:56.368 "method": "bdev_nvme_attach_controller" 00:06:56.368 } 00:06:56.368 EOF 00:06:56.368 )") 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:56.368 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:56.369 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:56.369 11:21:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:56.369 "params": { 00:06:56.369 "name": "Nvme0", 00:06:56.369 "trtype": "tcp", 00:06:56.369 "traddr": "10.0.0.2", 00:06:56.369 "adrfam": "ipv4", 00:06:56.369 "trsvcid": "4420", 00:06:56.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:56.369 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:56.369 "hdgst": false, 00:06:56.369 "ddgst": false 00:06:56.369 }, 00:06:56.369 "method": "bdev_nvme_attach_controller" 00:06:56.369 }' 00:06:56.369 [2024-12-09 11:21:48.500008] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
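
The JSON fragment printed just above is the whole bdev configuration bdevperf receives: gen_nvmf_target_json expands its heredoc into a single bdev_nvme_attach_controller entry aimed at the listener the target opened on 10.0.0.2:4420, and the /dev/fd/63 in the command line is bash process substitution. A standalone sketch of the equivalent invocation, assuming the printed params sit inside the usual subsystems/bdev envelope (the envelope itself is not shown in this trace):

# run bdevperf for 10 s of verify I/O at queue depth 64; the config
# arrives on /dev/fd/63 via process substitution, as in the trace
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)

The attached controller surfaces as bdev Nvme0n1, which is what the iostat polling below watches.
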
00:06:56.369 [2024-12-09 11:21:48.500067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326743 ] 00:06:56.629 [2024-12-09 11:21:48.571810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.629 [2024-12-09 11:21:48.608082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.890 Running I/O for 10 seconds... 00:06:57.151 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.151 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:57.151 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:57.415 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.415 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.415 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:57.416 11:21:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.416 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.416 [2024-12-09 11:21:49.376545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ed40 is same with the state(6) to be set
00:06:57.416 [... the same tcp.c:1790 "recv state of tqpair=0xb0ed40 is same with the state(6) to be set" *ERROR* line repeated continuously through 11:21:49.377009 ...]
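
Those recv-state errors follow directly from the rpc_cmd nvmf_subsystem_remove_host above, and the test only issued it after proving I/O was in flight. That proof is the waitforio helper traced a little earlier, which polls bdevperf's per-bdev counters over its private RPC socket until enough reads have completed. A trimmed sketch (socket path, bdev name, jq filter, and the retry budget of 10 match the trace; the sleep between probes is an assumption, and rpc.py is invoked from the SPDK tree):

# poll until bdevperf's Nvme0n1 shows at least 100 completed reads
i=10
while (( i != 0 )); do
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        break    # I/O confirmed in flight; safe to start disrupting the host
    fi
    sleep 0.25   # assumption: the real helper's inter-probe delay may differ
    (( i-- ))
done

This run read num_read_ops=771 on the first probe, so the loop broke immediately with ret=0.
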
00:06:57.416 [2024-12-09 11:21:49.377470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.416 [2024-12-09 11:21:49.377508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:57.417 [... the same READ command / ABORTED - SQ DELETION completion pair repeated for cid:1 through cid:39, lba advancing by 128 per command from 106624 to 111488, timestamps 11:21:49.377528 through 11:21:49.378206; the captured window ends during the cid:39 pair ...]
11:21:49.378214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.417 [2024-12-09 11:21:49.378223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.417 [2024-12-09 11:21:49.378230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.417 [2024-12-09 11:21:49.378240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.417 [2024-12-09 11:21:49.378247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 
11:21:49.378386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 
11:21:49.378560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:57.418 [2024-12-09 11:21:49.378627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.378636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149eb0 is same with the state(6) to be set 00:06:57.418 [2024-12-09 11:21:49.379902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:57.418 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.418 task offset: 106496 on job bdev=Nvme0n1 fails 00:06:57.418 00:06:57.418 Latency(us) 00:06:57.418 [2024-12-09T10:21:49.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.418 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:57.418 Job: Nvme0n1 ended in about 0.58 seconds with error 00:06:57.418 Verification LBA range: start 0x0 length 0x400 00:06:57.418 Nvme0n1 : 0.58 1444.52 90.28 111.12 0.00 40165.36 7591.25 34515.63 00:06:57.418 [2024-12-09T10:21:49.580Z] =================================================================================================================== 00:06:57.418 [2024-12-09T10:21:49.580Z] Total : 1444.52 90.28 111.12 0.00 40165.36 7591.25 34515.63 00:06:57.418 [2024-12-09 11:21:49.381929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.418 [2024-12-09 11:21:49.381955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2139b10 (9): Bad file descriptor 00:06:57.418 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:57.418 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.418 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 [2024-12-09 11:21:49.388265] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 
'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:57.418 [2024-12-09 11:21:49.388350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:57.418 [2024-12-09 11:21:49.388374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:57.418 [2024-12-09 11:21:49.388390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:57.418 [2024-12-09 11:21:49.388399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:57.418 [2024-12-09 11:21:49.388415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:57.418 [2024-12-09 11:21:49.388423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2139b10 00:06:57.418 [2024-12-09 11:21:49.388442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2139b10 (9): Bad file descriptor 00:06:57.418 [2024-12-09 11:21:49.388455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:57.418 [2024-12-09 11:21:49.388464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:57.418 [2024-12-09 11:21:49.388473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:57.418 [2024-12-09 11:21:49.388482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
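The rejected FABRIC CONNECT above is the behavior under test: the subsystem's host allow-list is being exercised, and host_management.sh line 85 re-adds the host NQN via rpc_cmd so that later reconnects are accepted. A minimal out-of-band sketch of the same allow/deny round-trip against a running target (rpc.py path and NQNs taken from this log; nvmf_subsystem_remove_host is not shown in the trace above and is included here only to make the sequence standalone):

    # hypothetical standalone reproduction of the allow/deny round-trip
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # initiator reconnects now fail with "does not allow host" (sct 1, sc 132), as logged above
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # subsequent reconnect attempts from nqn.2016-06.io.spdk:host0 are accepted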
00:06:57.418 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.418 11:21:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3326743
00:06:58.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3326743) - No such process
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:58.363 {
00:06:58.363 "params": {
00:06:58.363 "name": "Nvme$subsystem",
00:06:58.363 "trtype": "$TEST_TRANSPORT",
00:06:58.363 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:58.363 "adrfam": "ipv4",
00:06:58.363 "trsvcid": "$NVMF_PORT",
00:06:58.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:58.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:58.363 "hdgst": ${hdgst:-false},
00:06:58.363 "ddgst": ${ddgst:-false}
00:06:58.363 },
00:06:58.363 "method": "bdev_nvme_attach_controller"
00:06:58.363 }
00:06:58.363 EOF
00:06:58.363 )")
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:58.363 11:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:58.363 "params": {
00:06:58.363 "name": "Nvme0",
00:06:58.363 "trtype": "tcp",
00:06:58.363 "traddr": "10.0.0.2",
00:06:58.363 "adrfam": "ipv4",
00:06:58.363 "trsvcid": "4420",
00:06:58.363 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:58.363 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:58.363 "hdgst": false,
00:06:58.363 "ddgst": false
00:06:58.363 },
00:06:58.363 "method": "bdev_nvme_attach_controller"
00:06:58.363 }'
[2024-12-09 11:21:50.452949] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:58.363 [2024-12-09 11:21:50.453001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327092 ] 00:06:58.624 [2024-12-09 11:21:50.524473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.624 [2024-12-09 11:21:50.560140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.624 Running I/O for 1 seconds... 00:07:00.013 2154.00 IOPS, 134.62 MiB/s 00:07:00.013 Latency(us) 00:07:00.013 [2024-12-09T10:21:52.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.013 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:00.013 Verification LBA range: start 0x0 length 0x400 00:07:00.013 Nvme0n1 : 1.06 2099.25 131.20 0.00 0.00 28674.88 2676.05 46749.01 00:07:00.013 [2024-12-09T10:21:52.175Z] =================================================================================================================== 00:07:00.013 [2024-12-09T10:21:52.175Z] Total : 2099.25 131.20 0.00 0.00 28674.88 2676.05 46749.01 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.013 11:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:00.013 rmmod nvme_tcp 00:07:00.013 rmmod nvme_fabrics 00:07:00.013 rmmod nvme_keyring 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3326451 ']' 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3326451 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3326451 ']' 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3326451 00:07:00.013 11:21:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3326451 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3326451' 00:07:00.013 killing process with pid 3326451 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3326451 00:07:00.013 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3326451 00:07:00.276 [2024-12-09 11:21:52.176355] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.276 11:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:02.194 00:07:02.194 real 0m14.703s 00:07:02.194 user 0m23.044s 00:07:02.194 sys 0m6.751s 00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:02.194 ************************************ 00:07:02.194 END TEST nvmf_host_management 00:07:02.194 ************************************ 00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
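The real/user/sys trio and the START/END banners bracketing each suite come from the run_test helper in autotest_common.sh, and the next suite, nvmf_lvol, is launched through the same wrapper. A simplified sketch of that pattern (a hypothetical minimal equivalent; the real helper also manages xtrace state and exit-code bookkeeping):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                 # produces the real/user/sys lines seen above
        echo "************ END TEST $name ************"
    }
    run_test nvmf_lvol test/nvmf/target/nvmf_lvol.sh --transport=tcp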
00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.194 11:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.457 ************************************ 00:07:02.457 START TEST nvmf_lvol 00:07:02.457 ************************************ 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:02.457 * Looking for test storage... 00:07:02.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:02.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.457 --rc genhtml_branch_coverage=1 00:07:02.457 --rc genhtml_function_coverage=1 00:07:02.457 --rc genhtml_legend=1 00:07:02.457 --rc geninfo_all_blocks=1 00:07:02.457 --rc geninfo_unexecuted_blocks=1 00:07:02.457 00:07:02.457 ' 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:02.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.457 --rc genhtml_branch_coverage=1 00:07:02.457 --rc genhtml_function_coverage=1 00:07:02.457 --rc genhtml_legend=1 00:07:02.457 --rc geninfo_all_blocks=1 00:07:02.457 --rc geninfo_unexecuted_blocks=1 00:07:02.457 00:07:02.457 ' 00:07:02.457 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:02.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.457 --rc genhtml_branch_coverage=1 00:07:02.457 --rc genhtml_function_coverage=1 00:07:02.457 --rc genhtml_legend=1 00:07:02.458 --rc geninfo_all_blocks=1 00:07:02.458 --rc geninfo_unexecuted_blocks=1 00:07:02.458 00:07:02.458 ' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:02.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.458 --rc genhtml_branch_coverage=1 00:07:02.458 --rc genhtml_function_coverage=1 00:07:02.458 --rc genhtml_legend=1 00:07:02.458 --rc geninfo_all_blocks=1 00:07:02.458 --rc geninfo_unexecuted_blocks=1 00:07:02.458 00:07:02.458 ' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
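The cmp_versions walk traced above amounts to a single version gate: the coverage flags are only exported when the installed lcov (1.15 in this run) is older than 2.x. A condensed sketch of that gate, reusing the lt helper from scripts/common.sh that the trace is stepping through (assumes scripts/common.sh is sourced):

    ver=$(lcov --version | awk '{print $NF}')   # yields "1.15" here
    if lt "$ver" 2; then                        # cmp_versions "1.15" '<' 2
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
        export LCOV_OPTS="$lcov_rc_opt
            --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
            --rc genhtml_legend=1 --rc geninfo_all_blocks=1
            --rc geninfo_unexecuted_blocks=1"   # the same flag set echoed in the trace above
    fi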
00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.458 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.720 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:02.721 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:02.721 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:02.721 11:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:10.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:10.867 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.867 11:22:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:10.867 Found net devices under 0000:31:00.0: cvl_0_0 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:10.867 Found net devices under 0000:31:00.1: cvl_0_1 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.867 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.868 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:07:10.868 00:07:10.868 --- 10.0.0.2 ping statistics --- 00:07:10.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.868 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:07:10.868 00:07:10.868 --- 10.0.0.1 ping statistics --- 00:07:10.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.868 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3331842 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3331842 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3331842 ']' 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.868 [2024-12-09 11:22:02.184612] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:07:10.868 [2024-12-09 11:22:02.184678] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.868 [2024-12-09 11:22:02.269346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.868 [2024-12-09 11:22:02.310896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.868 [2024-12-09 11:22:02.310934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.868 [2024-12-09 11:22:02.310942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.868 [2024-12-09 11:22:02.310949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.868 [2024-12-09 11:22:02.310955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.868 [2024-12-09 11:22:02.312371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.868 [2024-12-09 11:22:02.312488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.868 [2024-12-09 11:22:02.312491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.868 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.129 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.129 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:11.129 [2024-12-09 11:22:03.180800] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.129 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:11.390 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:11.390 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:11.651 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:11.651 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:11.651 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:11.913 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=11143a19-4bae-4a72-a865-2ebfdd467365 00:07:11.913 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 11143a19-4bae-4a72-a865-2ebfdd467365 lvol 20 00:07:12.186 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=35b0f260-da24-4012-acda-af8623ceae8d 00:07:12.186 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.448 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 35b0f260-da24-4012-acda-af8623ceae8d 00:07:12.448 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.711 [2024-12-09 11:22:04.679377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.711 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.711 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3332405 00:07:12.711 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:12.711 11:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:14.098 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 35b0f260-da24-4012-acda-af8623ceae8d MY_SNAPSHOT 00:07:14.098 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=82e4b6ad-fb7a-4c7a-8078-7c38d6d99a76 00:07:14.098 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 35b0f260-da24-4012-acda-af8623ceae8d 30 00:07:14.359 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 82e4b6ad-fb7a-4c7a-8078-7c38d6d99a76 MY_CLONE 00:07:14.621 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=59bcdbf1-5ad8-4f9b-b45e-6d87a3e4a203 00:07:14.621 11:22:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 59bcdbf1-5ad8-4f9b-b45e-6d87a3e4a203 00:07:14.883 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3332405 00:07:23.035 Initializing NVMe Controllers 00:07:23.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:23.035 Controller IO queue size 128, less than required. 00:07:23.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
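While the 10-second spdk_nvme_perf run above settles, the RPC sequence the lvol test has driven condenses to the sketch below. Here rpc.py stands for scripts/rpc.py, and the shell variables stand for the UUIDs the log prints; capturing them via command substitution is illustrative, not the harness's exact code:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                      # Malloc0
rpc.py bdev_malloc_create 64 512                      # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)      # 11143a19-... above
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol, 35b0f260-... above
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc.py bdev_lvol_resize "$lvol" 30                    # grow the live lvol while snapshotted
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"                     # fully allocate the clone, detaching it from its snapshot

The point of the sequence is that snapshot, resize, clone and inflate all happen while perf is actively writing to the exported namespace; the test then waits on the perf pid before tearing down.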
00:07:23.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:23.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:23.035 Initialization complete. Launching workers. 00:07:23.035 ======================================================== 00:07:23.035 Latency(us) 00:07:23.035 Device Information : IOPS MiB/s Average min max 00:07:23.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12267.70 47.92 10436.26 1649.62 56055.95 00:07:23.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16781.30 65.55 7628.99 3746.78 57550.92 00:07:23.035 ======================================================== 00:07:23.035 Total : 29049.00 113.47 8814.53 1649.62 57550.92 00:07:23.035 00:07:23.035 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:23.297 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 35b0f260-da24-4012-acda-af8623ceae8d 00:07:23.558 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11143a19-4bae-4a72-a865-2ebfdd467365 00:07:23.558 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.820 rmmod nvme_tcp 00:07:23.820 rmmod nvme_fabrics 00:07:23.820 rmmod nvme_keyring 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3331842 ']' 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3331842 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3331842 ']' 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3331842 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3331842 00:07:23.820 11:22:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3331842' 00:07:23.820 killing process with pid 3331842 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3331842 00:07:23.820 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3331842 00:07:24.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:24.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:24.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.081 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.081 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.081 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.081 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.081 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.996 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:25.996 00:07:25.996 real 0m23.714s 00:07:25.996 user 1m3.847s 00:07:25.996 sys 0m8.612s 00:07:25.996 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.996 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:25.996 ************************************ 00:07:25.996 END TEST nvmf_lvol 00:07:25.996 ************************************ 00:07:25.997 11:22:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:25.997 11:22:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.997 11:22:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.997 11:22:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.258 ************************************ 00:07:26.259 START TEST nvmf_lvs_grow 00:07:26.259 ************************************ 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:26.259 * Looking for test storage... 
00:07:26.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.259 --rc genhtml_branch_coverage=1 00:07:26.259 --rc genhtml_function_coverage=1 00:07:26.259 --rc genhtml_legend=1 00:07:26.259 --rc geninfo_all_blocks=1 00:07:26.259 --rc geninfo_unexecuted_blocks=1 00:07:26.259 00:07:26.259 ' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.259 --rc genhtml_branch_coverage=1 00:07:26.259 --rc genhtml_function_coverage=1 00:07:26.259 --rc genhtml_legend=1 00:07:26.259 --rc geninfo_all_blocks=1 00:07:26.259 --rc geninfo_unexecuted_blocks=1 00:07:26.259 00:07:26.259 ' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.259 --rc genhtml_branch_coverage=1 00:07:26.259 --rc genhtml_function_coverage=1 00:07:26.259 --rc genhtml_legend=1 00:07:26.259 --rc geninfo_all_blocks=1 00:07:26.259 --rc geninfo_unexecuted_blocks=1 00:07:26.259 00:07:26.259 ' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.259 --rc genhtml_branch_coverage=1 00:07:26.259 --rc genhtml_function_coverage=1 00:07:26.259 --rc genhtml_legend=1 00:07:26.259 --rc geninfo_all_blocks=1 00:07:26.259 --rc geninfo_unexecuted_blocks=1 00:07:26.259 00:07:26.259 ' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:26.259 11:22:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.259 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.260 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:34.405 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.405 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:34.406 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.406 11:22:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:34.406 Found net devices under 0000:31:00.0: cvl_0_0 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:34.406 Found net devices under 0000:31:00.1: cvl_0_1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:07:34.406 00:07:34.406 --- 10.0.0.2 ping statistics --- 00:07:34.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.406 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:07:34.406 00:07:34.406 --- 10.0.0.1 ping statistics --- 00:07:34.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.406 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3338979 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3338979 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3338979 ']' 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.406 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.406 [2024-12-09 11:22:25.865096] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
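nvmfappstart above launches the target inside the namespace and then blocks until the RPC socket answers. A minimal stand-in for what common.sh's waitforlisten does — the polling loop is illustrative, and rpc_get_methods is simply used here as a cheap RPC to probe with:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1    # keep polling until the app listens on /var/tmp/spdk.sock
done

Note the core mask: this target runs on a single core (-m 0x1), where the nvmf_lvol target above used -m 0x7.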
00:07:34.406 [2024-12-09 11:22:25.865147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.406 [2024-12-09 11:22:25.942830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.406 [2024-12-09 11:22:25.977363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.406 [2024-12-09 11:22:25.977396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.406 [2024-12-09 11:22:25.977404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.406 [2024-12-09 11:22:25.977410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.406 [2024-12-09 11:22:25.977416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.406 [2024-12-09 11:22:25.978009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.667 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.667 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:34.667 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.667 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.667 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.667 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.667 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:34.928 [2024-12-09 11:22:26.851621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.928 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.929 ************************************ 00:07:34.929 START TEST lvs_grow_clean 00:07:34.929 ************************************ 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:34.929 11:22:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.929 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.190 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:35.190 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:35.190 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:35.190 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:35.190 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:35.452 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:35.452 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:35.452 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c14de7bd-356a-4f39-8e16-4a5e095ab932 lvol 150 00:07:35.713 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f5aeeab5-dc74-4769-b2e1-b01d8f032e50 00:07:35.713 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.713 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:35.713 [2024-12-09 11:22:27.811680] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:35.713 [2024-12-09 11:22:27.811733] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:35.713 true 00:07:35.713 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:35.713 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:35.974 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:35.974 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.236 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f5aeeab5-dc74-4769-b2e1-b01d8f032e50 00:07:36.236 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.498 [2024-12-09 11:22:28.469725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3339530 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3339530 /var/tmp/bdevperf.sock 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3339530 ']' 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.498 11:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:36.759 [2024-12-09 11:22:28.684815] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
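The moving part of lvs_grow_clean is already visible in the trace above: the lvstore sits on a file-backed aio bdev, so growing it means resizing the file, rescanning the bdev, and then growing the lvstore. Condensed, with the backing-file path shortened to its basename and the cluster counts taken from the log:

truncate -s 200M aio_bdev                       # backing file
rpc.py bdev_aio_create aio_bdev aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49: ~200 MiB in 4 MiB clusters, minus metadata
rpc.py bdev_lvol_create -u "$lvs" lvol 150      # 150 MiB lvol
truncate -s 400M aio_bdev
rpc.py bdev_aio_rescan aio_bdev                 # bdev grows: 51200 -> 102400 blocks
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49: the lvstore does not grow by itself
rpc.py bdev_lvol_grow_lvstore -u "$lvs"         # issued later in the test; clusters become 99

The bdevperf process starting up here then attaches over NVMe/TCP and keeps writing to the exported lvol while the grow happens underneath it.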
00:07:36.759 [2024-12-09 11:22:28.684865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339530 ] 00:07:36.759 [2024-12-09 11:22:28.773221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.759 [2024-12-09 11:22:28.809681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.703 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.703 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:37.703 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:37.965 Nvme0n1 00:07:37.965 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:37.965 [ 00:07:37.965 { 00:07:37.965 "name": "Nvme0n1", 00:07:37.965 "aliases": [ 00:07:37.965 "f5aeeab5-dc74-4769-b2e1-b01d8f032e50" 00:07:37.965 ], 00:07:37.966 "product_name": "NVMe disk", 00:07:37.966 "block_size": 4096, 00:07:37.966 "num_blocks": 38912, 00:07:37.966 "uuid": "f5aeeab5-dc74-4769-b2e1-b01d8f032e50", 00:07:37.966 "numa_id": 0, 00:07:37.966 "assigned_rate_limits": { 00:07:37.966 "rw_ios_per_sec": 0, 00:07:37.966 "rw_mbytes_per_sec": 0, 00:07:37.966 "r_mbytes_per_sec": 0, 00:07:37.966 "w_mbytes_per_sec": 0 00:07:37.966 }, 00:07:37.966 "claimed": false, 00:07:37.966 "zoned": false, 00:07:37.966 "supported_io_types": { 00:07:37.966 "read": true, 00:07:37.966 "write": true, 00:07:37.966 "unmap": true, 00:07:37.966 "flush": true, 00:07:37.966 "reset": true, 00:07:37.966 "nvme_admin": true, 00:07:37.966 "nvme_io": true, 00:07:37.966 "nvme_io_md": false, 00:07:37.966 "write_zeroes": true, 00:07:37.966 "zcopy": false, 00:07:37.966 "get_zone_info": false, 00:07:37.966 "zone_management": false, 00:07:37.966 "zone_append": false, 00:07:37.966 "compare": true, 00:07:37.966 "compare_and_write": true, 00:07:37.966 "abort": true, 00:07:37.966 "seek_hole": false, 00:07:37.966 "seek_data": false, 00:07:37.966 "copy": true, 00:07:37.966 "nvme_iov_md": false 00:07:37.966 }, 00:07:37.966 "memory_domains": [ 00:07:37.966 { 00:07:37.966 "dma_device_id": "system", 00:07:37.966 "dma_device_type": 1 00:07:37.966 } 00:07:37.966 ], 00:07:37.966 "driver_specific": { 00:07:37.966 "nvme": [ 00:07:37.966 { 00:07:37.966 "trid": { 00:07:37.966 "trtype": "TCP", 00:07:37.966 "adrfam": "IPv4", 00:07:37.966 "traddr": "10.0.0.2", 00:07:37.966 "trsvcid": "4420", 00:07:37.966 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:37.966 }, 00:07:37.966 "ctrlr_data": { 00:07:37.966 "cntlid": 1, 00:07:37.966 "vendor_id": "0x8086", 00:07:37.966 "model_number": "SPDK bdev Controller", 00:07:37.966 "serial_number": "SPDK0", 00:07:37.966 "firmware_revision": "25.01", 00:07:37.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.966 "oacs": { 00:07:37.966 "security": 0, 00:07:37.966 "format": 0, 00:07:37.966 "firmware": 0, 00:07:37.966 "ns_manage": 0 00:07:37.966 }, 00:07:37.966 "multi_ctrlr": true, 00:07:37.966 
"ana_reporting": false 00:07:37.966 }, 00:07:37.966 "vs": { 00:07:37.966 "nvme_version": "1.3" 00:07:37.966 }, 00:07:37.966 "ns_data": { 00:07:37.966 "id": 1, 00:07:37.966 "can_share": true 00:07:37.966 } 00:07:37.966 } 00:07:37.966 ], 00:07:37.966 "mp_policy": "active_passive" 00:07:37.966 } 00:07:37.966 } 00:07:37.966 ] 00:07:37.966 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3339737 00:07:37.966 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:37.966 11:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.227 Running I/O for 10 seconds... 00:07:39.168 Latency(us) 00:07:39.168 [2024-12-09T10:22:31.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.168 Nvme0n1 : 1.00 17853.00 69.74 0.00 0.00 0.00 0.00 0.00 00:07:39.168 [2024-12-09T10:22:31.330Z] =================================================================================================================== 00:07:39.168 [2024-12-09T10:22:31.330Z] Total : 17853.00 69.74 0.00 0.00 0.00 0.00 0.00 00:07:39.168 00:07:40.110 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:40.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.110 Nvme0n1 : 2.00 17975.00 70.21 0.00 0.00 0.00 0.00 0.00 00:07:40.110 [2024-12-09T10:22:32.272Z] =================================================================================================================== 00:07:40.110 [2024-12-09T10:22:32.272Z] Total : 17975.00 70.21 0.00 0.00 0.00 0.00 0.00 00:07:40.110 00:07:40.110 true 00:07:40.370 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:40.370 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:40.370 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:40.370 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:40.370 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3339737 00:07:41.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.312 Nvme0n1 : 3.00 18031.33 70.43 0.00 0.00 0.00 0.00 0.00 00:07:41.312 [2024-12-09T10:22:33.474Z] =================================================================================================================== 00:07:41.312 [2024-12-09T10:22:33.474Z] Total : 18031.33 70.43 0.00 0.00 0.00 0.00 0.00 00:07:41.312 00:07:42.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.254 Nvme0n1 : 4.00 18065.25 70.57 0.00 0.00 0.00 0.00 0.00 00:07:42.254 [2024-12-09T10:22:34.416Z] 
=================================================================================================================== 00:07:42.254 [2024-12-09T10:22:34.416Z] Total : 18065.25 70.57 0.00 0.00 0.00 0.00 0.00 00:07:42.254 00:07:43.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.196 Nvme0n1 : 5.00 18094.80 70.68 0.00 0.00 0.00 0.00 0.00 00:07:43.196 [2024-12-09T10:22:35.358Z] =================================================================================================================== 00:07:43.196 [2024-12-09T10:22:35.358Z] Total : 18094.80 70.68 0.00 0.00 0.00 0.00 0.00 00:07:43.196 00:07:44.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.138 Nvme0n1 : 6.00 18132.00 70.83 0.00 0.00 0.00 0.00 0.00 00:07:44.138 [2024-12-09T10:22:36.300Z] =================================================================================================================== 00:07:44.138 [2024-12-09T10:22:36.300Z] Total : 18132.00 70.83 0.00 0.00 0.00 0.00 0.00 00:07:44.138 00:07:45.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.081 Nvme0n1 : 7.00 18141.71 70.87 0.00 0.00 0.00 0.00 0.00 00:07:45.081 [2024-12-09T10:22:37.243Z] =================================================================================================================== 00:07:45.081 [2024-12-09T10:22:37.243Z] Total : 18141.71 70.87 0.00 0.00 0.00 0.00 0.00 00:07:45.081 00:07:46.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.469 Nvme0n1 : 8.00 18159.50 70.94 0.00 0.00 0.00 0.00 0.00 00:07:46.469 [2024-12-09T10:22:38.631Z] =================================================================================================================== 00:07:46.469 [2024-12-09T10:22:38.631Z] Total : 18159.50 70.94 0.00 0.00 0.00 0.00 0.00 00:07:46.469 00:07:47.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.051 Nvme0n1 : 9.00 18179.89 71.02 0.00 0.00 0.00 0.00 0.00 00:07:47.051 [2024-12-09T10:22:39.213Z] =================================================================================================================== 00:07:47.051 [2024-12-09T10:22:39.213Z] Total : 18179.89 71.02 0.00 0.00 0.00 0.00 0.00 00:07:47.051 00:07:48.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.439 Nvme0n1 : 10.00 18182.60 71.03 0.00 0.00 0.00 0.00 0.00 00:07:48.439 [2024-12-09T10:22:40.601Z] =================================================================================================================== 00:07:48.439 [2024-12-09T10:22:40.601Z] Total : 18182.60 71.03 0.00 0.00 0.00 0.00 0.00 00:07:48.439 00:07:48.439 00:07:48.439 Latency(us) 00:07:48.439 [2024-12-09T10:22:40.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.439 Nvme0n1 : 10.00 18188.95 71.05 0.00 0.00 7034.50 4259.84 13161.81 00:07:48.439 [2024-12-09T10:22:40.601Z] =================================================================================================================== 00:07:48.439 [2024-12-09T10:22:40.601Z] Total : 18188.95 71.05 0.00 0.00 7034.50 4259.84 13161.81 00:07:48.439 { 00:07:48.439 "results": [ 00:07:48.439 { 00:07:48.439 "job": "Nvme0n1", 00:07:48.439 "core_mask": "0x2", 00:07:48.439 "workload": "randwrite", 00:07:48.439 "status": "finished", 00:07:48.439 "queue_depth": 128, 00:07:48.439 "io_size": 4096, 00:07:48.439 
"runtime": 10.003544, 00:07:48.439 "iops": 18188.95383476096, 00:07:48.439 "mibps": 71.050600917035, 00:07:48.439 "io_failed": 0, 00:07:48.439 "io_timeout": 0, 00:07:48.439 "avg_latency_us": 7034.497797611851, 00:07:48.439 "min_latency_us": 4259.84, 00:07:48.439 "max_latency_us": 13161.813333333334 00:07:48.439 } 00:07:48.439 ], 00:07:48.439 "core_count": 1 00:07:48.439 } 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3339530 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3339530 ']' 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3339530 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339530 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339530' 00:07:48.439 killing process with pid 3339530 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3339530 00:07:48.439 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.439 00:07:48.439 Latency(us) 00:07:48.439 [2024-12-09T10:22:40.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.439 [2024-12-09T10:22:40.601Z] =================================================================================================================== 00:07:48.439 [2024-12-09T10:22:40.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3339530 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.439 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.701 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:48.701 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:48.962 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:48.962 11:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:48.962 11:22:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.962 [2024-12-09 11:22:41.084218] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:49.223 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:49.224 request: 00:07:49.224 { 00:07:49.224 "uuid": "c14de7bd-356a-4f39-8e16-4a5e095ab932", 00:07:49.224 "method": "bdev_lvol_get_lvstores", 00:07:49.224 "req_id": 1 00:07:49.224 } 00:07:49.224 Got JSON-RPC error response 00:07:49.224 response: 00:07:49.224 { 00:07:49.224 "code": -19, 00:07:49.224 "message": "No such device" 00:07:49.224 } 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.224 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.485 aio_bdev 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f5aeeab5-dc74-4769-b2e1-b01d8f032e50 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f5aeeab5-dc74-4769-b2e1-b01d8f032e50 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.485 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f5aeeab5-dc74-4769-b2e1-b01d8f032e50 -t 2000 00:07:49.747 [ 00:07:49.747 { 00:07:49.747 "name": "f5aeeab5-dc74-4769-b2e1-b01d8f032e50", 00:07:49.747 "aliases": [ 00:07:49.747 "lvs/lvol" 00:07:49.747 ], 00:07:49.747 "product_name": "Logical Volume", 00:07:49.747 "block_size": 4096, 00:07:49.747 "num_blocks": 38912, 00:07:49.747 "uuid": "f5aeeab5-dc74-4769-b2e1-b01d8f032e50", 00:07:49.747 "assigned_rate_limits": { 00:07:49.747 "rw_ios_per_sec": 0, 00:07:49.747 "rw_mbytes_per_sec": 0, 00:07:49.747 "r_mbytes_per_sec": 0, 00:07:49.747 "w_mbytes_per_sec": 0 00:07:49.747 }, 00:07:49.747 "claimed": false, 00:07:49.747 "zoned": false, 00:07:49.747 "supported_io_types": { 00:07:49.747 "read": true, 00:07:49.747 "write": true, 00:07:49.747 "unmap": true, 00:07:49.747 "flush": false, 00:07:49.747 "reset": true, 00:07:49.747 "nvme_admin": false, 00:07:49.747 "nvme_io": false, 00:07:49.747 "nvme_io_md": false, 00:07:49.747 "write_zeroes": true, 00:07:49.747 "zcopy": false, 00:07:49.747 "get_zone_info": false, 00:07:49.747 "zone_management": false, 00:07:49.747 "zone_append": false, 00:07:49.747 "compare": false, 00:07:49.747 "compare_and_write": false, 00:07:49.747 "abort": false, 00:07:49.747 "seek_hole": true, 00:07:49.747 "seek_data": true, 00:07:49.747 "copy": false, 00:07:49.747 "nvme_iov_md": false 00:07:49.747 }, 00:07:49.747 "driver_specific": { 00:07:49.747 "lvol": { 00:07:49.747 "lvol_store_uuid": "c14de7bd-356a-4f39-8e16-4a5e095ab932", 00:07:49.747 "base_bdev": "aio_bdev", 00:07:49.747 "thin_provision": false, 00:07:49.747 "num_allocated_clusters": 38, 00:07:49.747 "snapshot": false, 00:07:49.747 "clone": false, 00:07:49.747 "esnap_clone": false 00:07:49.747 } 00:07:49.747 } 00:07:49.747 } 00:07:49.747 ] 00:07:49.747 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:49.747 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:49.747 
11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:50.009 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:50.009 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:50.009 11:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:50.009 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:50.009 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f5aeeab5-dc74-4769-b2e1-b01d8f032e50 00:07:50.271 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c14de7bd-356a-4f39-8e16-4a5e095ab932 00:07:50.533 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.533 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.795 00:07:50.795 real 0m15.788s 00:07:50.795 user 0m15.557s 00:07:50.795 sys 0m1.304s 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 ************************************ 00:07:50.795 END TEST lvs_grow_clean 00:07:50.795 ************************************ 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 ************************************ 00:07:50.795 START TEST lvs_grow_dirty 00:07:50.795 ************************************ 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.795 11:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.056 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:51.056 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:51.056 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:07:51.056 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:07:51.056 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.317 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:51.317 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:51.317 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce64388b-65b8-4adf-b0cf-7341e5d3208d lvol 150 00:07:51.578 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1641b003-8bcc-4fde-a512-c3fff3c2349a 00:07:51.578 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.578 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:51.578 [2024-12-09 11:22:43.673705] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:51.578 [2024-12-09 11:22:43.673767] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:51.578 true 00:07:51.578 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:07:51.578 11:22:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:51.839 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:51.840 11:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.100 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1641b003-8bcc-4fde-a512-c3fff3c2349a 00:07:52.100 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:52.361 [2024-12-09 11:22:44.315671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3342784 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3342784 /var/tmp/bdevperf.sock 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3342784 ']' 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.361 11:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.622 [2024-12-09 11:22:44.542751] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
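The initiator side mirrors the clean run above: bdevperf starts idle with -z, the exported subsystem is attached through its RPC socket, and perform_tests drives the queued workload. A minimal sketch under the assumption of a local SPDK checkout (the harness uses its waitforlisten helper rather than the crude poll below):

#!/usr/bin/env bash
# Sketch only: paths are assumptions; the bdevperf flags are copied
# verbatim from the run above (core mask 0x2, 4KiB I/O, qd 128, 10s randwrite).
set -e
spdk=/path/to/spdk
sock=/var/tmp/bdevperf.sock

# -z keeps bdevperf idle until perform_tests arrives over $sock.
$spdk/build/examples/bdevperf -r $sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
pid=$!
while [ ! -S $sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten

# Connect to the target exported earlier; this creates bdev Nvme0n1.
$spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# Run the configured workload and wait for completion.
$spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
kill $pid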
00:07:52.622 [2024-12-09 11:22:44.542804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342784 ] 00:07:52.622 [2024-12-09 11:22:44.627502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.622 [2024-12-09 11:22:44.657410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.192 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.192 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:53.192 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:53.763 Nvme0n1 00:07:53.763 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:53.763 [ 00:07:53.763 { 00:07:53.763 "name": "Nvme0n1", 00:07:53.763 "aliases": [ 00:07:53.763 "1641b003-8bcc-4fde-a512-c3fff3c2349a" 00:07:53.763 ], 00:07:53.763 "product_name": "NVMe disk", 00:07:53.763 "block_size": 4096, 00:07:53.763 "num_blocks": 38912, 00:07:53.763 "uuid": "1641b003-8bcc-4fde-a512-c3fff3c2349a", 00:07:53.763 "numa_id": 0, 00:07:53.763 "assigned_rate_limits": { 00:07:53.763 "rw_ios_per_sec": 0, 00:07:53.763 "rw_mbytes_per_sec": 0, 00:07:53.763 "r_mbytes_per_sec": 0, 00:07:53.763 "w_mbytes_per_sec": 0 00:07:53.763 }, 00:07:53.763 "claimed": false, 00:07:53.763 "zoned": false, 00:07:53.763 "supported_io_types": { 00:07:53.763 "read": true, 00:07:53.763 "write": true, 00:07:53.763 "unmap": true, 00:07:53.763 "flush": true, 00:07:53.763 "reset": true, 00:07:53.763 "nvme_admin": true, 00:07:53.763 "nvme_io": true, 00:07:53.763 "nvme_io_md": false, 00:07:53.763 "write_zeroes": true, 00:07:53.763 "zcopy": false, 00:07:53.763 "get_zone_info": false, 00:07:53.763 "zone_management": false, 00:07:53.763 "zone_append": false, 00:07:53.763 "compare": true, 00:07:53.763 "compare_and_write": true, 00:07:53.763 "abort": true, 00:07:53.763 "seek_hole": false, 00:07:53.763 "seek_data": false, 00:07:53.763 "copy": true, 00:07:53.763 "nvme_iov_md": false 00:07:53.763 }, 00:07:53.763 "memory_domains": [ 00:07:53.763 { 00:07:53.763 "dma_device_id": "system", 00:07:53.763 "dma_device_type": 1 00:07:53.763 } 00:07:53.763 ], 00:07:53.763 "driver_specific": { 00:07:53.763 "nvme": [ 00:07:53.763 { 00:07:53.763 "trid": { 00:07:53.763 "trtype": "TCP", 00:07:53.763 "adrfam": "IPv4", 00:07:53.763 "traddr": "10.0.0.2", 00:07:53.763 "trsvcid": "4420", 00:07:53.763 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:53.763 }, 00:07:53.763 "ctrlr_data": { 00:07:53.763 "cntlid": 1, 00:07:53.763 "vendor_id": "0x8086", 00:07:53.763 "model_number": "SPDK bdev Controller", 00:07:53.763 "serial_number": "SPDK0", 00:07:53.763 "firmware_revision": "25.01", 00:07:53.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.763 "oacs": { 00:07:53.763 "security": 0, 00:07:53.763 "format": 0, 00:07:53.763 "firmware": 0, 00:07:53.763 "ns_manage": 0 00:07:53.763 }, 00:07:53.763 "multi_ctrlr": true, 00:07:53.763 
"ana_reporting": false 00:07:53.763 }, 00:07:53.763 "vs": { 00:07:53.763 "nvme_version": "1.3" 00:07:53.763 }, 00:07:53.763 "ns_data": { 00:07:53.763 "id": 1, 00:07:53.763 "can_share": true 00:07:53.763 } 00:07:53.763 } 00:07:53.763 ], 00:07:53.763 "mp_policy": "active_passive" 00:07:53.763 } 00:07:53.763 } 00:07:53.763 ] 00:07:53.763 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3343076 00:07:53.763 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:53.763 11:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.023 Running I/O for 10 seconds... 00:07:54.966 Latency(us) 00:07:54.966 [2024-12-09T10:22:47.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.966 Nvme0n1 : 1.00 18011.00 70.36 0.00 0.00 0.00 0.00 0.00 00:07:54.966 [2024-12-09T10:22:47.128Z] =================================================================================================================== 00:07:54.966 [2024-12-09T10:22:47.128Z] Total : 18011.00 70.36 0.00 0.00 0.00 0.00 0.00 00:07:54.966 00:07:55.909 11:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:07:55.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.909 Nvme0n1 : 2.00 18081.00 70.63 0.00 0.00 0.00 0.00 0.00 00:07:55.909 [2024-12-09T10:22:48.071Z] =================================================================================================================== 00:07:55.909 [2024-12-09T10:22:48.071Z] Total : 18081.00 70.63 0.00 0.00 0.00 0.00 0.00 00:07:55.909 00:07:56.171 true 00:07:56.171 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:07:56.171 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:56.171 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:56.171 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:56.171 11:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3343076 00:07:57.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.115 Nvme0n1 : 3.00 18090.67 70.67 0.00 0.00 0.00 0.00 0.00 00:07:57.115 [2024-12-09T10:22:49.277Z] =================================================================================================================== 00:07:57.115 [2024-12-09T10:22:49.277Z] Total : 18090.67 70.67 0.00 0.00 0.00 0.00 0.00 00:07:57.115 00:07:58.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.058 Nvme0n1 : 4.00 18138.75 70.85 0.00 0.00 0.00 0.00 0.00 00:07:58.058 [2024-12-09T10:22:50.220Z] 
=================================================================================================================== 00:07:58.058 [2024-12-09T10:22:50.220Z] Total : 18138.75 70.85 0.00 0.00 0.00 0.00 0.00 00:07:58.058 00:07:59.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.002 Nvme0n1 : 5.00 18144.80 70.88 0.00 0.00 0.00 0.00 0.00 00:07:59.002 [2024-12-09T10:22:51.164Z] =================================================================================================================== 00:07:59.002 [2024-12-09T10:22:51.164Z] Total : 18144.80 70.88 0.00 0.00 0.00 0.00 0.00 00:07:59.002 00:07:59.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.946 Nvme0n1 : 6.00 18163.67 70.95 0.00 0.00 0.00 0.00 0.00 00:07:59.946 [2024-12-09T10:22:52.108Z] =================================================================================================================== 00:07:59.946 [2024-12-09T10:22:52.108Z] Total : 18163.67 70.95 0.00 0.00 0.00 0.00 0.00 00:07:59.946 00:08:00.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.890 Nvme0n1 : 7.00 18187.43 71.04 0.00 0.00 0.00 0.00 0.00 00:08:00.890 [2024-12-09T10:22:53.052Z] =================================================================================================================== 00:08:00.890 [2024-12-09T10:22:53.052Z] Total : 18187.43 71.04 0.00 0.00 0.00 0.00 0.00 00:08:00.890 00:08:02.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.276 Nvme0n1 : 8.00 18205.25 71.11 0.00 0.00 0.00 0.00 0.00 00:08:02.276 [2024-12-09T10:22:54.438Z] =================================================================================================================== 00:08:02.276 [2024-12-09T10:22:54.438Z] Total : 18205.25 71.11 0.00 0.00 0.00 0.00 0.00 00:08:02.276 00:08:02.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.849 Nvme0n1 : 9.00 18210.56 71.13 0.00 0.00 0.00 0.00 0.00 00:08:02.849 [2024-12-09T10:22:55.011Z] =================================================================================================================== 00:08:02.849 [2024-12-09T10:22:55.011Z] Total : 18210.56 71.13 0.00 0.00 0.00 0.00 0.00 00:08:02.849 00:08:04.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.237 Nvme0n1 : 10.00 18228.90 71.21 0.00 0.00 0.00 0.00 0.00 00:08:04.237 [2024-12-09T10:22:56.399Z] =================================================================================================================== 00:08:04.237 [2024-12-09T10:22:56.399Z] Total : 18228.90 71.21 0.00 0.00 0.00 0.00 0.00 00:08:04.237 00:08:04.237 00:08:04.237 Latency(us) 00:08:04.237 [2024-12-09T10:22:56.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.237 Nvme0n1 : 10.00 18227.75 71.20 0.00 0.00 7018.79 2990.08 13434.88 00:08:04.237 [2024-12-09T10:22:56.399Z] =================================================================================================================== 00:08:04.237 [2024-12-09T10:22:56.399Z] Total : 18227.75 71.20 0.00 0.00 7018.79 2990.08 13434.88 00:08:04.237 { 00:08:04.237 "results": [ 00:08:04.237 { 00:08:04.237 "job": "Nvme0n1", 00:08:04.237 "core_mask": "0x2", 00:08:04.237 "workload": "randwrite", 00:08:04.237 "status": "finished", 00:08:04.237 "queue_depth": 128, 00:08:04.237 "io_size": 4096, 00:08:04.237 
"runtime": 10.004199, 00:08:04.237 "iops": 18227.746169383478, 00:08:04.237 "mibps": 71.20213347415421, 00:08:04.237 "io_failed": 0, 00:08:04.237 "io_timeout": 0, 00:08:04.237 "avg_latency_us": 7018.792031616161, 00:08:04.237 "min_latency_us": 2990.08, 00:08:04.237 "max_latency_us": 13434.88 00:08:04.237 } 00:08:04.237 ], 00:08:04.237 "core_count": 1 00:08:04.237 } 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3342784 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3342784 ']' 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3342784 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3342784 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3342784' 00:08:04.237 killing process with pid 3342784 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3342784 00:08:04.237 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.237 00:08:04.237 Latency(us) 00:08:04.237 [2024-12-09T10:22:56.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.237 [2024-12-09T10:22:56.399Z] =================================================================================================================== 00:08:04.237 [2024-12-09T10:22:56.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3342784 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.237 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:04.498 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:04.498 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:04.759 11:22:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3338979 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3338979 00:08:04.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3338979 Killed "${NVMF_APP[@]}" "$@" 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3345157 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3345157 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3345157 ']' 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.759 11:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.759 [2024-12-09 11:22:56.787516] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:08:04.760 [2024-12-09 11:22:56.787572] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.760 [2024-12-09 11:22:56.866225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.760 [2024-12-09 11:22:56.900870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.760 [2024-12-09 11:22:56.900902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.760 [2024-12-09 11:22:56.900911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.760 [2024-12-09 11:22:56.900917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:04.760 [2024-12-09 11:22:56.900923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.760 [2024-12-09 11:22:56.901491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.701 [2024-12-09 11:22:57.769351] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:05.701 [2024-12-09 11:22:57.769442] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:05.701 [2024-12-09 11:22:57.769472] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1641b003-8bcc-4fde-a512-c3fff3c2349a 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1641b003-8bcc-4fde-a512-c3fff3c2349a 00:08:05.701 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.702 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:05.702 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.702 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.702 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:05.962 11:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1641b003-8bcc-4fde-a512-c3fff3c2349a -t 2000 00:08:05.962 [ 00:08:05.962 { 00:08:05.962 "name": "1641b003-8bcc-4fde-a512-c3fff3c2349a", 00:08:05.962 "aliases": [ 00:08:05.962 "lvs/lvol" 00:08:05.962 ], 00:08:05.962 "product_name": "Logical Volume", 00:08:05.962 "block_size": 4096, 00:08:05.962 "num_blocks": 38912, 00:08:05.962 "uuid": "1641b003-8bcc-4fde-a512-c3fff3c2349a", 00:08:05.962 "assigned_rate_limits": { 00:08:05.962 "rw_ios_per_sec": 0, 00:08:05.962 "rw_mbytes_per_sec": 0, 
00:08:05.962 "r_mbytes_per_sec": 0, 00:08:05.962 "w_mbytes_per_sec": 0 00:08:05.962 }, 00:08:05.962 "claimed": false, 00:08:05.962 "zoned": false, 00:08:05.962 "supported_io_types": { 00:08:05.962 "read": true, 00:08:05.962 "write": true, 00:08:05.962 "unmap": true, 00:08:05.962 "flush": false, 00:08:05.962 "reset": true, 00:08:05.962 "nvme_admin": false, 00:08:05.962 "nvme_io": false, 00:08:05.962 "nvme_io_md": false, 00:08:05.962 "write_zeroes": true, 00:08:05.962 "zcopy": false, 00:08:05.962 "get_zone_info": false, 00:08:05.962 "zone_management": false, 00:08:05.962 "zone_append": false, 00:08:05.962 "compare": false, 00:08:05.962 "compare_and_write": false, 00:08:05.962 "abort": false, 00:08:05.962 "seek_hole": true, 00:08:05.962 "seek_data": true, 00:08:05.962 "copy": false, 00:08:05.962 "nvme_iov_md": false 00:08:05.962 }, 00:08:05.962 "driver_specific": { 00:08:05.962 "lvol": { 00:08:05.962 "lvol_store_uuid": "ce64388b-65b8-4adf-b0cf-7341e5d3208d", 00:08:05.962 "base_bdev": "aio_bdev", 00:08:05.962 "thin_provision": false, 00:08:05.962 "num_allocated_clusters": 38, 00:08:05.962 "snapshot": false, 00:08:05.962 "clone": false, 00:08:05.962 "esnap_clone": false 00:08:05.962 } 00:08:05.962 } 00:08:05.962 } 00:08:05.962 ] 00:08:06.223 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:06.223 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:06.223 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:06.223 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:06.223 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:06.223 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:06.484 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:06.484 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:06.484 [2024-12-09 11:22:58.621525] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:06.745 request: 00:08:06.745 { 00:08:06.745 "uuid": "ce64388b-65b8-4adf-b0cf-7341e5d3208d", 00:08:06.745 "method": "bdev_lvol_get_lvstores", 00:08:06.745 "req_id": 1 00:08:06.745 } 00:08:06.745 Got JSON-RPC error response 00:08:06.745 response: 00:08:06.745 { 00:08:06.745 "code": -19, 00:08:06.745 "message": "No such device" 00:08:06.745 } 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.745 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.007 aio_bdev 00:08:07.007 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1641b003-8bcc-4fde-a512-c3fff3c2349a 00:08:07.007 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1641b003-8bcc-4fde-a512-c3fff3c2349a 00:08:07.007 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.007 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:07.007 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.007 11:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.007 11:22:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:07.007 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1641b003-8bcc-4fde-a512-c3fff3c2349a -t 2000 00:08:07.269 [ 00:08:07.269 { 00:08:07.269 "name": "1641b003-8bcc-4fde-a512-c3fff3c2349a", 00:08:07.269 "aliases": [ 00:08:07.269 "lvs/lvol" 00:08:07.269 ], 00:08:07.269 "product_name": "Logical Volume", 00:08:07.269 "block_size": 4096, 00:08:07.269 "num_blocks": 38912, 00:08:07.269 "uuid": "1641b003-8bcc-4fde-a512-c3fff3c2349a", 00:08:07.269 "assigned_rate_limits": { 00:08:07.269 "rw_ios_per_sec": 0, 00:08:07.269 "rw_mbytes_per_sec": 0, 00:08:07.269 "r_mbytes_per_sec": 0, 00:08:07.269 "w_mbytes_per_sec": 0 00:08:07.269 }, 00:08:07.269 "claimed": false, 00:08:07.269 "zoned": false, 00:08:07.269 "supported_io_types": { 00:08:07.269 "read": true, 00:08:07.269 "write": true, 00:08:07.269 "unmap": true, 00:08:07.269 "flush": false, 00:08:07.269 "reset": true, 00:08:07.269 "nvme_admin": false, 00:08:07.269 "nvme_io": false, 00:08:07.269 "nvme_io_md": false, 00:08:07.269 "write_zeroes": true, 00:08:07.269 "zcopy": false, 00:08:07.269 "get_zone_info": false, 00:08:07.269 "zone_management": false, 00:08:07.269 "zone_append": false, 00:08:07.269 "compare": false, 00:08:07.269 "compare_and_write": false, 00:08:07.269 "abort": false, 00:08:07.269 "seek_hole": true, 00:08:07.269 "seek_data": true, 00:08:07.269 "copy": false, 00:08:07.269 "nvme_iov_md": false 00:08:07.269 }, 00:08:07.269 "driver_specific": { 00:08:07.269 "lvol": { 00:08:07.269 "lvol_store_uuid": "ce64388b-65b8-4adf-b0cf-7341e5d3208d", 00:08:07.269 "base_bdev": "aio_bdev", 00:08:07.269 "thin_provision": false, 00:08:07.269 "num_allocated_clusters": 38, 00:08:07.269 "snapshot": false, 00:08:07.269 "clone": false, 00:08:07.269 "esnap_clone": false 00:08:07.269 } 00:08:07.269 } 00:08:07.269 } 00:08:07.269 ] 00:08:07.269 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:07.269 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:07.269 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:07.530 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:07.530 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:07.530 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:07.530 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:07.530 11:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1641b003-8bcc-4fde-a512-c3fff3c2349a 00:08:07.791 11:22:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce64388b-65b8-4adf-b0cf-7341e5d3208d 00:08:08.053 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:08.053 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.314 00:08:08.314 real 0m17.445s 00:08:08.314 user 0m45.681s 00:08:08.314 sys 0m2.944s 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.314 ************************************ 00:08:08.314 END TEST lvs_grow_dirty 00:08:08.314 ************************************ 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:08.314 nvmf_trace.0 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.314 rmmod nvme_tcp 00:08:08.314 rmmod nvme_fabrics 00:08:08.314 rmmod nvme_keyring 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:08.314 
11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3345157 ']' 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3345157 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3345157 ']' 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3345157 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3345157 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3345157' 00:08:08.314 killing process with pid 3345157 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3345157 00:08:08.314 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3345157 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.576 11:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.128 00:08:11.128 real 0m44.506s 00:08:11.128 user 1m7.589s 00:08:11.128 sys 0m10.275s 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 ************************************ 00:08:11.128 END TEST nvmf_lvs_grow 00:08:11.128 ************************************ 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 ************************************ 00:08:11.128 START TEST nvmf_bdev_io_wait 00:08:11.128 ************************************ 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:11.128 * Looking for test storage... 00:08:11.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.128 --rc genhtml_branch_coverage=1 00:08:11.128 --rc genhtml_function_coverage=1 00:08:11.128 --rc genhtml_legend=1 00:08:11.128 --rc geninfo_all_blocks=1 00:08:11.128 --rc geninfo_unexecuted_blocks=1 00:08:11.128 00:08:11.128 ' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.128 --rc genhtml_branch_coverage=1 00:08:11.128 --rc genhtml_function_coverage=1 00:08:11.128 --rc genhtml_legend=1 00:08:11.128 --rc geninfo_all_blocks=1 00:08:11.128 --rc geninfo_unexecuted_blocks=1 00:08:11.128 00:08:11.128 ' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.128 --rc genhtml_branch_coverage=1 00:08:11.128 --rc genhtml_function_coverage=1 00:08:11.128 --rc genhtml_legend=1 00:08:11.128 --rc geninfo_all_blocks=1 00:08:11.128 --rc geninfo_unexecuted_blocks=1 00:08:11.128 00:08:11.128 ' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.128 --rc genhtml_branch_coverage=1 00:08:11.128 --rc genhtml_function_coverage=1 00:08:11.128 --rc genhtml_legend=1 00:08:11.128 --rc geninfo_all_blocks=1 00:08:11.128 --rc geninfo_unexecuted_blocks=1 00:08:11.128 00:08:11.128 ' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.128 11:23:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.128 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.129 11:23:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:17.724 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:17.724 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.724 11:23:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:17.724 Found net devices under 0000:31:00.0: cvl_0_0 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:17.724 Found net devices under 0000:31:00.1: cvl_0_1 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.724 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.725 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:08:17.987 00:08:17.987 --- 10.0.0.2 ping statistics --- 00:08:17.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.987 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:08:17.987 00:08:17.987 --- 10.0.0.1 ping statistics --- 00:08:17.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.987 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.987 11:23:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3350295 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3350295 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3350295 ']' 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.987 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.987 [2024-12-09 11:23:10.077969] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:08:17.987 [2024-12-09 11:23:10.078051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.249 [2024-12-09 11:23:10.163184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.249 [2024-12-09 11:23:10.206366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.249 [2024-12-09 11:23:10.206403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.249 [2024-12-09 11:23:10.206411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.249 [2024-12-09 11:23:10.206418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.249 [2024-12-09 11:23:10.206424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.249 [2024-12-09 11:23:10.208274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.249 [2024-12-09 11:23:10.208396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.249 [2024-12-09 11:23:10.208553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.249 [2024-12-09 11:23:10.208554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.823 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.085 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.085 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.085 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.085 11:23:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:19.085 [2024-12-09 11:23:10.997053] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.085 Malloc0 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.085 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.086 [2024-12-09 11:23:11.056296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3350399 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3350403 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:19.086 { 00:08:19.086 "params": { 
00:08:19.086 "name": "Nvme$subsystem", 00:08:19.086 "trtype": "$TEST_TRANSPORT", 00:08:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "$NVMF_PORT", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.086 "hdgst": ${hdgst:-false}, 00:08:19.086 "ddgst": ${ddgst:-false} 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 } 00:08:19.086 EOF 00:08:19.086 )") 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3350406 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3350410 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:19.086 { 00:08:19.086 "params": { 00:08:19.086 "name": "Nvme$subsystem", 00:08:19.086 "trtype": "$TEST_TRANSPORT", 00:08:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "$NVMF_PORT", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.086 "hdgst": ${hdgst:-false}, 00:08:19.086 "ddgst": ${ddgst:-false} 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 } 00:08:19.086 EOF 00:08:19.086 )") 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:19.086 { 00:08:19.086 "params": { 00:08:19.086 "name": "Nvme$subsystem", 00:08:19.086 "trtype": "$TEST_TRANSPORT", 00:08:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "$NVMF_PORT", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.086 "hdgst": ${hdgst:-false}, 
00:08:19.086 "ddgst": ${ddgst:-false} 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 } 00:08:19.086 EOF 00:08:19.086 )") 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:19.086 { 00:08:19.086 "params": { 00:08:19.086 "name": "Nvme$subsystem", 00:08:19.086 "trtype": "$TEST_TRANSPORT", 00:08:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "$NVMF_PORT", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:19.086 "hdgst": ${hdgst:-false}, 00:08:19.086 "ddgst": ${ddgst:-false} 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 } 00:08:19.086 EOF 00:08:19.086 )") 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3350399 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:19.086 "params": { 00:08:19.086 "name": "Nvme1", 00:08:19.086 "trtype": "tcp", 00:08:19.086 "traddr": "10.0.0.2", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "4420", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.086 "hdgst": false, 00:08:19.086 "ddgst": false 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 }' 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:19.086 "params": { 00:08:19.086 "name": "Nvme1", 00:08:19.086 "trtype": "tcp", 00:08:19.086 "traddr": "10.0.0.2", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "4420", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.086 "hdgst": false, 00:08:19.086 "ddgst": false 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 }' 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:19.086 "params": { 00:08:19.086 "name": "Nvme1", 00:08:19.086 "trtype": "tcp", 00:08:19.086 "traddr": "10.0.0.2", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "4420", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.086 "hdgst": false, 00:08:19.086 "ddgst": false 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 }' 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:19.086 11:23:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:19.086 "params": { 00:08:19.086 "name": "Nvme1", 00:08:19.086 "trtype": "tcp", 00:08:19.086 "traddr": "10.0.0.2", 00:08:19.086 "adrfam": "ipv4", 00:08:19.086 "trsvcid": "4420", 00:08:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:19.086 "hdgst": false, 00:08:19.086 "ddgst": false 00:08:19.086 }, 00:08:19.086 "method": "bdev_nvme_attach_controller" 00:08:19.086 }' 00:08:19.086 [2024-12-09 11:23:11.112923] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:08:19.087 [2024-12-09 11:23:11.112975] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:19.087 [2024-12-09 11:23:11.114578] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:08:19.087 [2024-12-09 11:23:11.114626] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:19.087 [2024-12-09 11:23:11.119471] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:08:19.087 [2024-12-09 11:23:11.119519] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:19.087 [2024-12-09 11:23:11.121470] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:08:19.087 [2024-12-09 11:23:11.121518] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:19.349 [2024-12-09 11:23:11.268946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.349 [2024-12-09 11:23:11.297995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:19.349 [2024-12-09 11:23:11.315341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.349 [2024-12-09 11:23:11.344239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:19.349 [2024-12-09 11:23:11.363024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.349 [2024-12-09 11:23:11.391852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:19.349 [2024-12-09 11:23:11.410682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.349 [2024-12-09 11:23:11.438616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:19.349 Running I/O for 1 seconds... 00:08:19.349 Running I/O for 1 seconds... 00:08:19.611 Running I/O for 1 seconds... 00:08:19.611 Running I/O for 1 seconds... 00:08:20.556 14181.00 IOPS, 55.39 MiB/s 00:08:20.556 Latency(us) 00:08:20.556 [2024-12-09T10:23:12.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.556 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:20.556 Nvme1n1 : 1.01 14243.70 55.64 0.00 0.00 8959.39 4369.07 16384.00 00:08:20.556 [2024-12-09T10:23:12.718Z] =================================================================================================================== 00:08:20.556 [2024-12-09T10:23:12.718Z] Total : 14243.70 55.64 0.00 0.00 8959.39 4369.07 16384.00 00:08:20.556 11183.00 IOPS, 43.68 MiB/s 00:08:20.556 Latency(us) 00:08:20.556 [2024-12-09T10:23:12.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.556 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:20.556 Nvme1n1 : 1.01 11253.09 43.96 0.00 0.00 11334.35 5160.96 21736.11 00:08:20.556 [2024-12-09T10:23:12.718Z] =================================================================================================================== 00:08:20.556 [2024-12-09T10:23:12.718Z] Total : 11253.09 43.96 0.00 0.00 11334.35 5160.96 21736.11 00:08:20.556 11628.00 IOPS, 45.42 MiB/s 00:08:20.556 Latency(us) 00:08:20.556 [2024-12-09T10:23:12.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.556 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:20.556 Nvme1n1 : 1.01 11690.04 45.66 0.00 0.00 10915.18 4450.99 17257.81 00:08:20.556 [2024-12-09T10:23:12.718Z] =================================================================================================================== 00:08:20.556 [2024-12-09T10:23:12.718Z] Total : 11690.04 45.66 0.00 0.00 10915.18 4450.99 17257.81 00:08:20.556 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3350403 00:08:20.556 178384.00 IOPS, 696.81 MiB/s 00:08:20.556 Latency(us) 00:08:20.556 [2024-12-09T10:23:12.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.556 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:20.556 Nvme1n1 : 1.00 178029.00 695.43 0.00 0.00 714.87 300.37 1966.08 00:08:20.556 
[2024-12-09T10:23:12.718Z] =================================================================================================================== 00:08:20.556 [2024-12-09T10:23:12.718Z] Total : 178029.00 695.43 0.00 0.00 714.87 300.37 1966.08 00:08:20.556 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3350406 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3350410 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.818 rmmod nvme_tcp 00:08:20.818 rmmod nvme_fabrics 00:08:20.818 rmmod nvme_keyring 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3350295 ']' 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3350295 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3350295 ']' 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3350295 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3350295 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3350295' 00:08:20.818 killing process with pid 3350295 
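[Note] The four Latency(us) tables above come from one bdevperf job per workload — unmap, write, read, and flush — against Nvme1n1 at queue depth 128 with 4 KiB I/Os. The MiB/s column is just the IOPS column scaled by the I/O size, MiB/s = IOPS * 4096 / 2^20 = IOPS / 256; a quick sanity check against the reported totals:

    # reproduce the MiB/s column from the reported IOPS (4096-byte I/Os)
    echo "14243.70 11253.09 11690.04 178029.00" |
        awk '{ for (i = 1; i <= NF; i++) printf "%.2f\n", $i / 256 }'
    # prints 55.64, 43.96, 45.66, 695.43 -- matching the tables above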
00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3350295 00:08:20.818 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3350295 00:08:21.080 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.080 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.080 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.080 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:21.080 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.080 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:21.080 11:23:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.080 11:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.080 11:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.080 11:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.080 11:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.080 11:23:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.999 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.999 00:08:22.999 real 0m12.332s 00:08:22.999 user 0m18.124s 00:08:22.999 sys 0m6.804s 00:08:22.999 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.999 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.999 ************************************ 00:08:22.999 END TEST nvmf_bdev_io_wait 00:08:22.999 ************************************ 00:08:22.999 11:23:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:22.999 11:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.999 11:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.999 11:23:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.261 ************************************ 00:08:23.261 START TEST nvmf_queue_depth 00:08:23.261 ************************************ 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:23.261 * Looking for test storage... 
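[Note] The block above is the standard per-test teardown (nvmftestfini): unload the kernel initiator modules, restore iptables with SPDK's tagged rules filtered away, remove the target's network namespace, and flush the leftover interface address. Condensed into plain commands — a sketch, since _remove_spdk_ns's body is not shown in this trace (the netns delete is assumed) and the real script loops and retries:

    modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1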
00:08:23.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.261 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.262 --rc genhtml_branch_coverage=1 00:08:23.262 --rc genhtml_function_coverage=1 00:08:23.262 --rc genhtml_legend=1 00:08:23.262 --rc geninfo_all_blocks=1 00:08:23.262 --rc geninfo_unexecuted_blocks=1 00:08:23.262 00:08:23.262 ' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.262 --rc genhtml_branch_coverage=1 00:08:23.262 --rc genhtml_function_coverage=1 00:08:23.262 --rc genhtml_legend=1 00:08:23.262 --rc geninfo_all_blocks=1 00:08:23.262 --rc geninfo_unexecuted_blocks=1 00:08:23.262 00:08:23.262 ' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.262 --rc genhtml_branch_coverage=1 00:08:23.262 --rc genhtml_function_coverage=1 00:08:23.262 --rc genhtml_legend=1 00:08:23.262 --rc geninfo_all_blocks=1 00:08:23.262 --rc geninfo_unexecuted_blocks=1 00:08:23.262 00:08:23.262 ' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.262 --rc genhtml_branch_coverage=1 00:08:23.262 --rc genhtml_function_coverage=1 00:08:23.262 --rc genhtml_legend=1 00:08:23.262 --rc geninfo_all_blocks=1 00:08:23.262 --rc geninfo_unexecuted_blocks=1 00:08:23.262 00:08:23.262 ' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.262 11:23:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:31.413 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:31.413 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:31.413 Found net devices under 0000:31:00.0: cvl_0_0 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:31.413 Found net devices under 0000:31:00.1: cvl_0_1 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.413 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:08:31.414 00:08:31.414 --- 10.0.0.2 ping statistics --- 00:08:31.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.414 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:08:31.414 00:08:31.414 --- 10.0.0.1 ping statistics --- 00:08:31.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.414 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3355092 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3355092 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3355092 ']' 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.414 11:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.414 [2024-12-09 11:23:23.020607] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
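[Note] The target for this test runs inside a private network namespace wired to the initiator over the two E810 ports: cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, and the two pings above prove the path end-to-end before nvmf_tgt starts. The setup, distilled from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagged with an SPDK_NVMF comment so teardown can strip it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'

Every later nvmf_tgt invocation is then wrapped in `ip netns exec cvl_0_0_ns_spdk`.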
00:08:31.414 [2024-12-09 11:23:23.020673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.414 [2024-12-09 11:23:23.126313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.414 [2024-12-09 11:23:23.176949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.414 [2024-12-09 11:23:23.177002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.414 [2024-12-09 11:23:23.177019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.414 [2024-12-09 11:23:23.177027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.414 [2024-12-09 11:23:23.177033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.414 [2024-12-09 11:23:23.177868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.675 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.675 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:31.675 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.675 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.675 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.936 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.936 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.936 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.936 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.936 [2024-12-09 11:23:23.885876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.936 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.936 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.937 Malloc0 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.937 11:23:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.937 [2024-12-09 11:23:23.935066] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3355436 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3355436 /var/tmp/bdevperf.sock 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3355436 ']' 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.937 11:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.937 [2024-12-09 11:23:23.993226] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:08:31.937 [2024-12-09 11:23:23.993289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355436 ] 00:08:31.937 [2024-12-09 11:23:24.070567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.198 [2024-12-09 11:23:24.112596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.770 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.770 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:32.770 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:32.770 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.770 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.770 NVMe0n1 00:08:32.770 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.770 11:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.030 Running I/O for 10 seconds... 00:08:34.914 8658.00 IOPS, 33.82 MiB/s [2024-12-09T10:23:28.018Z] 9199.00 IOPS, 35.93 MiB/s [2024-12-09T10:23:29.406Z] 9887.67 IOPS, 38.62 MiB/s [2024-12-09T10:23:29.979Z] 10387.25 IOPS, 40.58 MiB/s [2024-12-09T10:23:31.366Z] 10644.20 IOPS, 41.58 MiB/s [2024-12-09T10:23:32.310Z] 10840.67 IOPS, 42.35 MiB/s [2024-12-09T10:23:33.253Z] 10964.57 IOPS, 42.83 MiB/s [2024-12-09T10:23:34.195Z] 11038.12 IOPS, 43.12 MiB/s [2024-12-09T10:23:35.139Z] 11101.22 IOPS, 43.36 MiB/s [2024-12-09T10:23:35.139Z] 11157.40 IOPS, 43.58 MiB/s 00:08:42.977 Latency(us) 00:08:42.977 [2024-12-09T10:23:35.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.977 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:42.977 Verification LBA range: start 0x0 length 0x4000 00:08:42.977 NVMe0n1 : 10.07 11170.46 43.63 0.00 0.00 91349.75 24576.00 77332.48 00:08:42.977 [2024-12-09T10:23:35.139Z] =================================================================================================================== 00:08:42.977 [2024-12-09T10:23:35.139Z] Total : 11170.46 43.63 0.00 0.00 91349.75 24576.00 77332.48 00:08:42.977 { 00:08:42.977 "results": [ 00:08:42.978 { 00:08:42.978 "job": "NVMe0n1", 00:08:42.978 "core_mask": "0x1", 00:08:42.978 "workload": "verify", 00:08:42.978 "status": "finished", 00:08:42.978 "verify_range": { 00:08:42.978 "start": 0, 00:08:42.978 "length": 16384 00:08:42.978 }, 00:08:42.978 "queue_depth": 1024, 00:08:42.978 "io_size": 4096, 00:08:42.978 "runtime": 10.070579, 00:08:42.978 "iops": 11170.460010293351, 00:08:42.978 "mibps": 43.6346094152084, 00:08:42.978 "io_failed": 0, 00:08:42.978 "io_timeout": 0, 00:08:42.978 "avg_latency_us": 91349.75204643844, 00:08:42.978 "min_latency_us": 24576.0, 00:08:42.978 "max_latency_us": 77332.48 00:08:42.978 } 00:08:42.978 ], 00:08:42.978 "core_count": 1 00:08:42.978 } 00:08:42.978 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3355436 00:08:42.978 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3355436 ']' 00:08:42.978 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3355436 00:08:42.978 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:42.978 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.978 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3355436 00:08:42.978 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3355436' 00:08:43.239 killing process with pid 3355436 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3355436 00:08:43.239 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.239 00:08:43.239 Latency(us) 00:08:43.239 [2024-12-09T10:23:35.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.239 [2024-12-09T10:23:35.401Z] =================================================================================================================== 00:08:43.239 [2024-12-09T10:23:35.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3355436 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.239 rmmod nvme_tcp 00:08:43.239 rmmod nvme_fabrics 00:08:43.239 rmmod nvme_keyring 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3355092 ']' 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3355092 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3355092 ']' 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3355092 
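[Note] The nvmf_queue_depth run being torn down here is a straightforward target-plus-bdevperf loop. Stripped of the rpc_cmd/xtrace wrapping, the target-side RPC sequence visible earlier in the trace is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -o/-u exactly as in the trace
    rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started with `-z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10`, the controller is attached as NVMe0 over that private RPC socket, and bdevperf.py's perform_tests drives the 10-second verify run that produced the ~11.2k IOPS table above. (rpc.py is shown for readability; the script actually issues these through its rpc_cmd helper against the namespaced target.)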
00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.239 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3355092 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3355092' 00:08:43.500 killing process with pid 3355092 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3355092 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3355092 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.500 11:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.049 00:08:46.049 real 0m22.436s 00:08:46.049 user 0m25.741s 00:08:46.049 sys 0m6.898s 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.049 ************************************ 00:08:46.049 END TEST nvmf_queue_depth 00:08:46.049 ************************************ 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.049 
************************************ 00:08:46.049 START TEST nvmf_target_multipath 00:08:46.049 ************************************ 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:46.049 * Looking for test storage... 00:08:46.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:46.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.049 --rc genhtml_branch_coverage=1 00:08:46.049 --rc genhtml_function_coverage=1 00:08:46.049 --rc genhtml_legend=1 00:08:46.049 --rc geninfo_all_blocks=1 00:08:46.049 --rc geninfo_unexecuted_blocks=1 00:08:46.049 00:08:46.049 ' 00:08:46.049 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:46.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.049 --rc genhtml_branch_coverage=1 00:08:46.049 --rc genhtml_function_coverage=1 00:08:46.049 --rc genhtml_legend=1 00:08:46.049 --rc geninfo_all_blocks=1 00:08:46.049 --rc geninfo_unexecuted_blocks=1 00:08:46.049 00:08:46.050 ' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:46.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.050 --rc genhtml_branch_coverage=1 00:08:46.050 --rc genhtml_function_coverage=1 00:08:46.050 --rc genhtml_legend=1 00:08:46.050 --rc geninfo_all_blocks=1 00:08:46.050 --rc geninfo_unexecuted_blocks=1 00:08:46.050 00:08:46.050 ' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:46.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.050 --rc genhtml_branch_coverage=1 00:08:46.050 --rc genhtml_function_coverage=1 00:08:46.050 --rc genhtml_legend=1 00:08:46.050 --rc geninfo_all_blocks=1 00:08:46.050 --rc geninfo_unexecuted_blocks=1 00:08:46.050 00:08:46.050 ' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.050 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:54.200 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.200 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:54.201 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:54.201 Found net devices under 0000:31:00.0: cvl_0_0 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.201 11:23:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:54.201 Found net devices under 0000:31:00.1: cvl_0_1 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:08:54.201 00:08:54.201 --- 10.0.0.2 ping statistics --- 00:08:54.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.201 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:08:54.201 00:08:54.201 --- 10.0.0.1 ping statistics --- 00:08:54.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.201 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:54.201 only one NIC for nvmf test 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
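The nvmf_tcp_init sequence above wires the two E810 ports (0000:31:00.0/0000:31:00.1, driver ice) into a point-to-point test network: cvl_0_0 is moved into a fresh network namespace to act as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP/4420, and both directions are ping-verified. Condensed into a standalone sketch, with interface names and addresses exactly as logged (root required; only for a disposable test host):

    # target side lives in its own netns so initiator and target can
    # share one machine without the kernel short-circuiting the TCP path
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic ahead of any default-drop firewall rules
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

multipath.sh then stops early at line 45: NVMF_SECOND_TARGET_IP is empty ('[' -z ']' above), so with only one usable NIC pair there is no second path to exercise. It prints "only one NIC for nvmf test" and nvmftestfini tears the test network back down, which is what the module-unload and iptables-restore lines around this point are doing.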
00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.201 rmmod nvme_tcp 00:08:54.201 rmmod nvme_fabrics 00:08:54.201 rmmod nvme_keyring 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:54.201 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.202 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.589 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.590 00:08:55.590 real 0m9.944s 00:08:55.590 user 0m2.146s 00:08:55.590 sys 0m5.704s 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:55.590 ************************************ 00:08:55.590 END TEST nvmf_target_multipath 00:08:55.590 ************************************ 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.590 ************************************ 00:08:55.590 START TEST nvmf_zcopy 00:08:55.590 ************************************ 00:08:55.590 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:55.852 * Looking for test storage... 
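At this point the multipath suite has exited cleanly (exit 0, roughly 9.9 s wall time per the timing block above), and the harness immediately launches the next suite via run_test nvmf_zcopy .../zcopy.sh --transport=tcp. run_test is defined in autotest_common.sh; judging only from the banners, the argument check ('[' 3 -le 1 ']'), and the timing lines visible in this log, it behaves roughly like the sketch below. This is a hypothetical reconstruction of the pattern, not the actual autotest_common.sh source, which handles xtrace and error reporting in more detail:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }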
00:08:55.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:55.852 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.853 --rc genhtml_branch_coverage=1 00:08:55.853 --rc genhtml_function_coverage=1 00:08:55.853 --rc genhtml_legend=1 00:08:55.853 --rc geninfo_all_blocks=1 00:08:55.853 --rc geninfo_unexecuted_blocks=1 00:08:55.853 00:08:55.853 ' 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.853 --rc genhtml_branch_coverage=1 00:08:55.853 --rc genhtml_function_coverage=1 00:08:55.853 --rc genhtml_legend=1 00:08:55.853 --rc geninfo_all_blocks=1 00:08:55.853 --rc geninfo_unexecuted_blocks=1 00:08:55.853 00:08:55.853 ' 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:55.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.853 --rc genhtml_branch_coverage=1 00:08:55.853 --rc genhtml_function_coverage=1 00:08:55.853 --rc genhtml_legend=1 00:08:55.853 --rc geninfo_all_blocks=1 00:08:55.853 --rc geninfo_unexecuted_blocks=1 00:08:55.853 00:08:55.853 ' 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.853 --rc genhtml_branch_coverage=1 00:08:55.853 --rc genhtml_function_coverage=1 00:08:55.853 --rc genhtml_legend=1 00:08:55.853 --rc geninfo_all_blocks=1 00:08:55.853 --rc geninfo_unexecuted_blocks=1 00:08:55.853 00:08:55.853 ' 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.853 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:55.854 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:04.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:04.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:04.008 Found net devices under 0000:31:00.0: cvl_0_0 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:04.008 Found net devices under 0000:31:00.1: cvl_0_1 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.008 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:04.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:09:04.009 00:09:04.009 --- 10.0.0.2 ping statistics --- 00:09:04.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.009 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:09:04.009 00:09:04.009 --- 10.0.0.1 ping statistics --- 00:09:04.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.009 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3366263 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3366263 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3366263 ']' 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.009 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.009 [2024-12-09 11:23:55.549597] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
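zcopy.sh has now rebuilt and ping-verified the same namespace topology as the previous test, and nvmfappstart brings up the target application: common.sh@508 above runs the nvmf_tgt binary inside the target namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, i.e. shared-memory id 0, all trace groups enabled, core mask 0x2 so the reactor lands on core 1), records nvmfpid=3366263, and waitforlisten blocks until the RPC socket answers. A minimal polling loop in the same spirit, shown only as an assumed shape (the real waitforlisten in autotest_common.sh is more careful about timeouts and dead PIDs):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # poll the UNIX-domain RPC socket until the target accepts commands;
    # /var/tmp/spdk.sock is the rpc_addr the log shows waitforlisten using
    for i in $(seq 1 100); do
        if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break                    # target is up and serving RPCs
        fi
        sleep 0.1
    done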
00:09:04.009 [2024-12-09 11:23:55.549645] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:04.009 [2024-12-09 11:23:55.646033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:04.009 [2024-12-09 11:23:55.686566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:04.009 [2024-12-09 11:23:55.686609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:04.009 [2024-12-09 11:23:55.686617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:04.009 [2024-12-09 11:23:55.686629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:04.009 [2024-12-09 11:23:55.686635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:04.009 [2024-12-09 11:23:55.687347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:04.272 [2024-12-09 11:23:56.405904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:04.272 [2024-12-09 11:23:56.422246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.272 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:04.534 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.534 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:09:04.534 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:04.535 malloc0
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:04.535 {
00:09:04.535 "params": {
00:09:04.535 "name": "Nvme$subsystem",
00:09:04.535 "trtype": "$TEST_TRANSPORT",
00:09:04.535 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:04.535 "adrfam": "ipv4",
00:09:04.535 "trsvcid": "$NVMF_PORT",
00:09:04.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:04.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:04.535 "hdgst": ${hdgst:-false},
00:09:04.535 "ddgst": ${ddgst:-false}
00:09:04.535 },
00:09:04.535 "method": "bdev_nvme_attach_controller"
00:09:04.535 }
00:09:04.535 EOF
00:09:04.535 )")
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
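At this point the target is fully configured through a short RPC sequence: a zero-copy TCP transport, a subsystem allowing up to 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a 32 MB malloc bdev with 4096-byte blocks, and that bdev exposed as NSID 1. Replayed by hand the sequence would look roughly like this (a sketch; the harness wraps these calls in its rpc_cmd helper, and the scripts/rpc.py path and default /var/tmp/spdk.sock socket are assumptions here):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport, zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                        # 32 MB ram disk, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The UNIX-domain RPC socket is a filesystem object shared across network namespaces, so no ip netns exec is needed to reach the target from the default namespace.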
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:04.535 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:04.535 "params": {
00:09:04.535 "name": "Nvme1",
00:09:04.535 "trtype": "tcp",
00:09:04.535 "traddr": "10.0.0.2",
00:09:04.535 "adrfam": "ipv4",
00:09:04.535 "trsvcid": "4420",
00:09:04.535 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:04.535 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:04.535 "hdgst": false,
00:09:04.535 "ddgst": false
00:09:04.535 },
00:09:04.535 "method": "bdev_nvme_attach_controller"
00:09:04.535 }'
00:09:04.535 [2024-12-09 11:23:56.509490] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:09:04.535 [2024-12-09 11:23:56.509557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366535 ]
00:09:04.535 [2024-12-09 11:23:56.587939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:04.535 [2024-12-09 11:23:56.629849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.797 Running I/O for 10 seconds...
00:09:07.129 6698.00 IOPS, 52.33 MiB/s [2024-12-09T10:24:00.236Z]
6736.50 IOPS, 52.63 MiB/s [2024-12-09T10:24:01.183Z]
7600.00 IOPS, 59.38 MiB/s [2024-12-09T10:24:02.125Z]
8145.50 IOPS, 63.64 MiB/s [2024-12-09T10:24:03.066Z]
8478.20 IOPS, 66.24 MiB/s [2024-12-09T10:24:04.011Z]
8702.33 IOPS, 67.99 MiB/s [2024-12-09T10:24:05.398Z]
8860.00 IOPS, 69.22 MiB/s [2024-12-09T10:24:06.339Z]
8979.25 IOPS, 70.15 MiB/s [2024-12-09T10:24:07.281Z]
9074.33 IOPS, 70.89 MiB/s [2024-12-09T10:24:07.281Z]
9148.70 IOPS, 71.47 MiB/s
00:09:15.119 Latency(us)
00:09:15.119 [2024-12-09T10:24:07.281Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:15.119 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:15.119 Verification LBA range: start 0x0 length 0x1000
00:09:15.119 Nvme1n1                     :      10.01    9150.37      71.49       0.00     0.00   13936.89    1952.43   26869.76
00:09:15.119 [2024-12-09T10:24:07.281Z] ===================================================================================================================
00:09:15.119 [2024-12-09T10:24:07.281Z] Total                       :               9150.37      71.49       0.00     0.00   13936.89    1952.43   26869.76
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3368736
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:15.119 {
00:09:15.119 "params": {
00:09:15.119 "name": "Nvme$subsystem",
00:09:15.119 "trtype": "$TEST_TRANSPORT",
00:09:15.119 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:15.119 "adrfam": "ipv4",
00:09:15.119 "trsvcid": "$NVMF_PORT",
00:09:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:15.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:15.119 "hdgst": ${hdgst:-false},
00:09:15.119 "ddgst": ${ddgst:-false}
00:09:15.119 },
00:09:15.119 "method": "bdev_nvme_attach_controller"
00:09:15.119 }
00:09:15.119 EOF
00:09:15.119 )")
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:15.119 [2024-12-09 11:24:07.100778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.119 [2024-12-09 11:24:07.100810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:09:15.119 [2024-12-09 11:24:07.108758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.119 [2024-12-09 11:24:07.108766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:15.119 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:15.119 "params": {
00:09:15.119 "name": "Nvme1",
00:09:15.119 "trtype": "tcp",
00:09:15.119 "traddr": "10.0.0.2",
00:09:15.119 "adrfam": "ipv4",
00:09:15.119 "trsvcid": "4420",
00:09:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:15.119 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:15.119 "hdgst": false,
00:09:15.119 "ddgst": false
00:09:15.119 },
00:09:15.119 "method": "bdev_nvme_attach_controller"
00:09:15.119 }'
00:09:15.120 [2024-12-09 11:24:07.116777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.120 [2024-12-09 11:24:07.116784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:15.120 [2024-12-09 11:24:07.124796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.120 [2024-12-09 11:24:07.124803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:15.120 [2024-12-09 11:24:07.126301] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:09:15.120 [2024-12-09 11:24:07.126347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368736 ]
00:09:15.120 [2024-12-09 11:24:07.132817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:15.120 [2024-12-09 11:24:07.132825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair repeats roughly every 8 ms for each retried nvmf_subsystem_add_ns, timestamps 11:24:07.144847 through 11:24:08.988137 ...]
00:09:15.120 [2024-12-09 11:24:07.197063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:15.120 [2024-12-09 11:24:07.232638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[...]
00:09:15.381 Running I/O for 5 seconds...
[...]
00:09:16.426 19073.00 IOPS, 149.01 MiB/s [2024-12-09T10:24:08.588Z]
[...]
00:09:16.948 [2024-12-09 11:24:08.988137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:16.948 [2024-12-09 11:24:08.988152]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.948 [2024-12-09 11:24:08.996808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.948 [2024-12-09 11:24:08.996822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.948 [2024-12-09 11:24:09.005627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.948 [2024-12-09 11:24:09.005641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.948 [2024-12-09 11:24:09.014624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.014638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.023295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.023308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.031740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.031755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.040505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.040520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.049080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.049095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.058119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.058133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.067225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.067240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.075906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.075920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.084653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.084667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.093355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.093369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.949 [2024-12-09 11:24:09.102159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.949 [2024-12-09 11:24:09.102173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.110652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.110666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.119755] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.119769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.128256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.128270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.137291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.137305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.145738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.145752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.154673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.154687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.163180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.163194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.172330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.172344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.180325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.180339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.189736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.189751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.198195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.198209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.207064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.207078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.215627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.215641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.224347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.224361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.232932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.232947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.241205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.241219] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.250265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.250280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.259199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.259214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.268140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.268154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.277102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.277117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.285758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.285773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.293570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.293585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.209 [2024-12-09 11:24:09.302344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.209 [2024-12-09 11:24:09.302358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.210 [2024-12-09 11:24:09.310850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.210 [2024-12-09 11:24:09.310864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.210 [2024-12-09 11:24:09.319629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.210 [2024-12-09 11:24:09.319643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.210 [2024-12-09 11:24:09.328479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.210 [2024-12-09 11:24:09.328493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.210 [2024-12-09 11:24:09.337050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.210 [2024-12-09 11:24:09.337064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.210 [2024-12-09 11:24:09.345752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.210 [2024-12-09 11:24:09.345767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.210 [2024-12-09 11:24:09.354431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.210 [2024-12-09 11:24:09.354446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.210 [2024-12-09 11:24:09.363224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.210 [2024-12-09 11:24:09.363238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.371723] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.371738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.380009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.380028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.388782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.388796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.397120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.397135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.405587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.405601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.414667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.414681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.423254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.423268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.432174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.432188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.440844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.440858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.449844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.449858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.458294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.458308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.466711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.466725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.475551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.475569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.484391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.484406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.493185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.493199] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.501954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.501968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.510432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.510447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 19167.50 IOPS, 149.75 MiB/s [2024-12-09T10:24:09.634Z] [2024-12-09 11:24:09.518911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.518925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.527718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.527733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.536210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.536225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.544062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.544076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.552780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.552795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.561018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.561033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.569949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.569963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.578770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.578784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.586854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.586868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.595777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.595792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.604167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.604182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.613328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.613343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 
11:24:09.621786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.621800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.472 [2024-12-09 11:24:09.630840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.472 [2024-12-09 11:24:09.630854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.639531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.639550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.648391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.648407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.657486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.657501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.666458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.666473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.675044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.675059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.683942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.683957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.691954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.691969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.700912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.700926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.709036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.709050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.717841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.717855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.726999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.727018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.735436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.735451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.743788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.743803] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.753053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.753068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.761828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.761843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.770433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.770448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.779529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.779543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.788556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.788571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.797513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.797528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.805971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.805990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.814893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.814908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.823948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.823962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.832743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.832758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.841546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.841560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.850127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.850142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.858739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.858754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.867565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.867580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.876511] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.876525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.734 [2024-12-09 11:24:09.885025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.734 [2024-12-09 11:24:09.885039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.995 [2024-12-09 11:24:09.893895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.995 [2024-12-09 11:24:09.893910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.995 [2024-12-09 11:24:09.902451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.902467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.911316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.911330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.920464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.920479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.928979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.928994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.937539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.937553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.946632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.946647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.955037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.955052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.963949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.963963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.972609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.972623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.981645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.981660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.989901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.989916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:09.998546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:09.998560] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.012284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.012299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.020234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.020248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.028884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.028899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.037572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.037586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.045901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.045916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.054663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.054678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.062990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.063005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.071841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.071856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.080283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.080298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.088944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.088958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.096874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.096888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.106210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.106225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.114073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.114089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.123422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.123437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.131241] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.131255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.140357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.140372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.996 [2024-12-09 11:24:10.149732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.996 [2024-12-09 11:24:10.149747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.158071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.158086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.166842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.166856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.175614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.175628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.184625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.184639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.193072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.193086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.201764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.201779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.210776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.210790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.219521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.219535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.228339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.228353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.256 [2024-12-09 11:24:10.237157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.256 [2024-12-09 11:24:10.237171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.246167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.246181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.255225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.255240] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.263906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.263920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.273031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.273045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.280959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.280973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.290452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.290466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.299292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.299306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.307784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.307798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.316583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.316596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.325726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.325740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.334711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.334726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.343944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.343958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.352562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.352576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.361277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.361292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.370028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.370042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.378325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.378340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.387373] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.387387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.396291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.396305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.405203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.405217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.257 [2024-12-09 11:24:10.413956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.257 [2024-12-09 11:24:10.413970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.422801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.422816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.431563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.431577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.440440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.440454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.448710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.448724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.457744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.457758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.466144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.466159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.475420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.475434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.484049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.484063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.493449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.493462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.502243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.502256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.511122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.511136] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 19176.33 IOPS, 149.82 MiB/s [2024-12-09T10:24:10.680Z] [2024-12-09 11:24:10.519182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.519196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.527905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.527919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.536280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.536294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.545020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.545034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.553996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.554014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.562396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.562409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.571072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.571086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.578949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.578963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.588083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.588097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.597112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.597126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.606196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.606210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.614934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.614948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.623517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.623531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 11:24:10.632243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.632264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.518 [2024-12-09 
11:24:10.641230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.518 [2024-12-09 11:24:10.641245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line failure from subsystem.c and nvmf_rpc.c repeats every few milliseconds, with only the timestamps changing, from 11:24:10.649 through 11:24:11.509 ...]
00:09:19.565 19206.50 IOPS, 150.05 MiB/s [2024-12-09T10:24:11.727Z]
[... the error pair continues unchanged from 11:24:11.519 through 11:24:12.422 ...]
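The stretch above is the test repeatedly attempting to add a namespace whose NSID is already occupied while I/O is in flight; each attempt produces the same two-line failure. As a hedged sketch (not the test script itself), the failing call against a running target would look like this, reusing the subsystem NQN, the malloc0 bdev, and the rpc_cmd helper that appear later in this log:

    # Sketch: assumes a running SPDK nvmf target whose subsystem
    # nqn.2016-06.io.spdk:cnode1 already exposes some bdev as NSID 1.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # -> subsystem.c: "Requested NSID 1 already in use"
    # -> nvmf_rpc.c:  "Unable to add namespace"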
[2024-12-09 11:24:12.431906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.350 [2024-12-09 11:24:12.431920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats from 11:24:12.440 through 11:24:12.511 ...]
00:09:20.611 19213.60 IOPS, 150.11 MiB/s [2024-12-09T10:24:12.773Z]
[2024-12-09 11:24:12.520466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.611 [2024-12-09 11:24:12.520480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-12-09 11:24:12.526166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.611 [2024-12-09 11:24:12.526179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:20.611
00:09:20.611 Latency(us)
[2024-12-09T10:24:12.773Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:20.611 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:20.611 Nvme1n1                     :       5.01   19213.98     150.11       0.00     0.00    6655.14    2566.83   15291.73
[2024-12-09T10:24:12.773Z] ===================================================================================================================
[2024-12-09T10:24:12.773Z] Total                       :              19213.98     150.11       0.00     0.00    6655.14    2566.83   15291.73
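The summary is internally consistent, which is a quick way to sanity-check this kind of report. At the job's 8192-byte I/O size:

    19213.98 IOPS x 8192 B ~= 157,400,924 B/s ~= 150.11 MiB/s

matching the MiB/s column, and Little's law ties queue depth to average latency: 128 / 6655.14 us ~= 19233 IOPS, within about 0.1% of the measured rate. Fail/s and TO/s are 0.00, so the duplicate-NSID errors above never disturbed the data path.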
00:09:20.611 [2024-12-09 11:24:12.534185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.611 [2024-12-09 11:24:12.534195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair recurs at ~8 ms intervals from 11:24:12.542 through 11:24:12.638 while the script tears the job down ...]
00:09:20.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3368736) - No such process
00:09:20.611 11:24:12
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3368736 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 delay0 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.611 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:20.872 [2024-12-09 11:24:12.806197] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:29.020 Initializing NVMe Controllers 00:09:29.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:29.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:29.020 Initialization complete. Launching workers. 
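For context on the run above: zcopy.sh has just reaped the earlier I/O job (PID 3368736) and swapped the namespace over to delay0, a delay bdev stacked on malloc0 whose four latency arguments are all 1000000 us, so every queued command lingers for roughly a second and the abort example has in-flight I/O to cancel. A minimal sketch of that setup, using only the RPCs and invocation already traced above, is:

    # Sketch of the setup traced above; values copied from the log.
    # -r/-t/-w/-n set the delay bdev's average and tail read/write
    # latencies in microseconds (1000000 us ~= 1 s per queued command).
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # The abort example then connects over TCP and aborts in-flight I/O:
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

In the statistics that follow, the large "failed" count on the NS line reads as intended behavior: those are the commands the tool cancelled (abort success 12948 out of 13093 submitted) rather than genuine I/O errors.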
00:09:29.020 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 302, failed: 12852 00:09:29.020 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13093, failed to submit 61 00:09:29.020 success 12948, unsuccessful 145, failed 0 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.020 rmmod nvme_tcp 00:09:29.020 rmmod nvme_fabrics 00:09:29.020 rmmod nvme_keyring 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:29.020 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3366263 ']' 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3366263 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3366263 ']' 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3366263 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3366263 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3366263' 00:09:29.021 killing process with pid 3366263 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3366263 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3366263 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.021 11:24:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.021 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.965 00:09:29.965 real 0m34.331s 00:09:29.965 user 0m46.142s 00:09:29.965 sys 0m11.476s 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.965 ************************************ 00:09:29.965 END TEST nvmf_zcopy 00:09:29.965 ************************************ 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.965 ************************************ 00:09:29.965 START TEST nvmf_nmic 00:09:29.965 ************************************ 00:09:29.965 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:30.227 * Looking for test storage... 
00:09:30.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.227 --rc genhtml_branch_coverage=1 00:09:30.227 --rc genhtml_function_coverage=1 00:09:30.227 --rc genhtml_legend=1 00:09:30.227 --rc geninfo_all_blocks=1 00:09:30.227 --rc geninfo_unexecuted_blocks=1 00:09:30.227 00:09:30.227 ' 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.227 --rc genhtml_branch_coverage=1 00:09:30.227 --rc genhtml_function_coverage=1 00:09:30.227 --rc genhtml_legend=1 00:09:30.227 --rc geninfo_all_blocks=1 00:09:30.227 --rc geninfo_unexecuted_blocks=1 00:09:30.227 00:09:30.227 ' 00:09:30.227 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.227 --rc genhtml_branch_coverage=1 00:09:30.227 --rc genhtml_function_coverage=1 00:09:30.227 --rc genhtml_legend=1 00:09:30.227 --rc geninfo_all_blocks=1 00:09:30.228 --rc geninfo_unexecuted_blocks=1 00:09:30.228 00:09:30.228 ' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.228 --rc genhtml_branch_coverage=1 00:09:30.228 --rc genhtml_function_coverage=1 00:09:30.228 --rc genhtml_legend=1 00:09:30.228 --rc geninfo_all_blocks=1 00:09:30.228 --rc geninfo_unexecuted_blocks=1 00:09:30.228 00:09:30.228 ' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
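The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) is deciding whether the installed lcov predates 2.x so the matching coverage flags get exported. The comparison splits each version on '.', '-', or ':' and compares fields numerically, left to right. A simplified sketch of that logic, assuming purely numeric fields:

    # Simplified field-wise version compare in the spirit of cmp_versions above.
    # Succeeds (returns 0) when $1 sorts before $2; non-numeric fields not handled.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'pre-2.0 lcov: use the --rc lcov_* option spelling'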
00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:30.228 
11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.228 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:38.375 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:38.376 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:38.376 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.376 11:24:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:38.376 Found net devices under 0000:31:00.0: cvl_0_0 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:38.376 Found net devices under 0000:31:00.1: cvl_0_1 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:38.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:09:38.376 00:09:38.376 --- 10.0.0.2 ping statistics --- 00:09:38.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.376 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:38.376 00:09:38.376 --- 10.0.0.1 ping statistics --- 00:09:38.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.376 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3375940 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3375940 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3375940 ']' 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.376 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.376 [2024-12-09 11:24:29.965852] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:09:38.376 [2024-12-09 11:24:29.965918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.376 [2024-12-09 11:24:30.052882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.376 [2024-12-09 11:24:30.097762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.376 [2024-12-09 11:24:30.097803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.376 [2024-12-09 11:24:30.097811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.376 [2024-12-09 11:24:30.097818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.376 [2024-12-09 11:24:30.097824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.376 [2024-12-09 11:24:30.099610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.376 [2024-12-09 11:24:30.099731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.376 [2024-12-09 11:24:30.099894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.376 [2024-12-09 11:24:30.099895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.638 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.638 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:38.638 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.638 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.638 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.900 [2024-12-09 11:24:30.822281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.900 Malloc0 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.900 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.900 [2024-12-09 11:24:30.892399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:38.901 test case1: single bdev can't be used in multiple subsystems 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.901 [2024-12-09 11:24:30.928297] bdev.c:8511:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:38.901 [2024-12-09 11:24:30.928316] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:38.901 [2024-12-09 11:24:30.928324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.901 request: 00:09:38.901 { 00:09:38.901 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:38.901 "namespace": { 00:09:38.901 "bdev_name": "Malloc0", 00:09:38.901 "no_auto_visible": false, 
00:09:38.901 "hide_metadata": false 00:09:38.901 }, 00:09:38.901 "method": "nvmf_subsystem_add_ns", 00:09:38.901 "req_id": 1 00:09:38.901 } 00:09:38.901 Got JSON-RPC error response 00:09:38.901 response: 00:09:38.901 { 00:09:38.901 "code": -32602, 00:09:38.901 "message": "Invalid parameters" 00:09:38.901 } 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:38.901 Adding namespace failed - expected result. 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:38.901 test case2: host connect to nvmf target in multiple paths 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.901 [2024-12-09 11:24:30.940441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.901 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.285 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:42.201 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.201 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:42.201 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.201 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:42.201 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:44.125 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:44.125 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:44.125 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.125 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:44.125 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.125 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:44.125 11:24:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:44.125 [global] 00:09:44.125 thread=1 00:09:44.125 invalidate=1 00:09:44.125 rw=write 00:09:44.125 time_based=1 00:09:44.125 runtime=1 00:09:44.125 ioengine=libaio 00:09:44.125 direct=1 00:09:44.125 bs=4096 00:09:44.125 iodepth=1 00:09:44.125 norandommap=0 00:09:44.125 numjobs=1 00:09:44.125 00:09:44.125 verify_dump=1 00:09:44.125 verify_backlog=512 00:09:44.125 verify_state_save=0 00:09:44.125 do_verify=1 00:09:44.125 verify=crc32c-intel 00:09:44.125 [job0] 00:09:44.125 filename=/dev/nvme0n1 00:09:44.125 Could not set queue depth (nvme0n1) 00:09:44.386 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.386 fio-3.35 00:09:44.386 Starting 1 thread 00:09:45.330 00:09:45.330 job0: (groupid=0, jobs=1): err= 0: pid=3377483: Mon Dec 9 11:24:37 2024 00:09:45.330 read: IOPS=17, BW=71.9KiB/s (73.7kB/s)(72.0KiB/1001msec) 00:09:45.330 slat (nsec): min=26698, max=27469, avg=26996.50, stdev=229.52 00:09:45.330 clat (usec): min=40886, max=42003, avg=41352.17, stdev=501.32 00:09:45.330 lat (usec): min=40912, max=42030, avg=41379.16, stdev=501.36 00:09:45.330 clat percentiles (usec): 00:09:45.330 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:45.330 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:45.330 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:45.330 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:45.330 | 99.99th=[42206] 00:09:45.330 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:45.330 slat (usec): min=9, max=29548, avg=85.51, stdev=1304.70 00:09:45.330 clat (usec): min=227, max=548, avg=407.05, stdev=65.25 00:09:45.330 lat (usec): min=238, max=29898, avg=492.56, stdev=1304.07 00:09:45.330 clat percentiles (usec): 00:09:45.330 | 1.00th=[ 239], 5.00th=[ 262], 10.00th=[ 326], 20.00th=[ 347], 00:09:45.330 | 30.00th=[ 363], 40.00th=[ 420], 50.00th=[ 433], 60.00th=[ 441], 00:09:45.330 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 474], 95.00th=[ 482], 00:09:45.330 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 545], 00:09:45.330 | 99.99th=[ 545] 00:09:45.330 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.330 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.330 lat (usec) : 250=3.02%, 500=91.32%, 750=2.26% 00:09:45.330 lat (msec) : 50=3.40% 00:09:45.330 cpu : usr=1.00%, sys=1.10%, ctx=534, majf=0, minf=1 00:09:45.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.330 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.330 00:09:45.330 Run status group 0 (all jobs): 00:09:45.330 READ: bw=71.9KiB/s (73.7kB/s), 71.9KiB/s-71.9KiB/s (73.7kB/s-73.7kB/s), io=72.0KiB (73.7kB), run=1001-1001msec 00:09:45.330 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:45.330 00:09:45.330 Disk stats (read/write): 00:09:45.330 nvme0n1: ios=40/512, merge=0/0, ticks=1592/201, in_queue=1793, util=98.90% 
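For readability, here is the job file the fio-wrapper generated above, reassembled as a sketch with comments; the option lines are verbatim from the log, and the filename is specific to this run:

    # Recreate the one-second write/verify job printed above (sketch).
    cat > nmic-job0.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    ; O_DIRECT with 4 KiB blocks at queue depth 1: a latency test, not bandwidth
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    ; every block is checksummed on write and checked on read-back
    do_verify=1
    verify=crc32c-intel
    [job0]
    ; the NVMe/TCP namespace surfaced by the nvme connect calls above
    filename=/dev/nvme0n1
    EOF
    # fio nmic-job0.fio   # rerun against a connected namespace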
00:09:45.330 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.592 rmmod nvme_tcp 00:09:45.592 rmmod nvme_fabrics 00:09:45.592 rmmod nvme_keyring 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3375940 ']' 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3375940 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3375940 ']' 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3375940 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.592 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3375940 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3375940' 00:09:45.855 killing process with pid 3375940 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3375940 
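waitforserial (before the fio run) and waitforserial_disconnect (traced just above) are the same polling idiom in opposite directions: list block devices with their serial numbers and loop, bounded by a retry budget, until the SPDK serial appears or disappears. A condensed sketch of the connect side, using the serial and intervals from this run:

    # Poll until a block device with the given serial shows up, as the
    # waitforserial trace above does (bounded loop, two-second interval).
    waitforserial() {
        local serial=$1 i
        for (( i = 0; i <= 15; i++ )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c -w "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1    # device never appeared
    }
    waitforserial SPDKISFASTANDAWESOME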
00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3375940 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.855 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.404 00:09:48.404 real 0m17.913s 00:09:48.404 user 0m47.823s 00:09:48.404 sys 0m6.595s 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.404 ************************************ 00:09:48.404 END TEST nvmf_nmic 00:09:48.404 ************************************ 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.404 ************************************ 00:09:48.404 START TEST nvmf_fio_target 00:09:48.404 ************************************ 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:48.404 * Looking for test storage... 
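Before the fio_target output continues, it is worth restating what nmic verified above: case1 attaches Malloc0 to cnode1 and then expects nvmf_subsystem_add_ns of the same bdev into cnode2 to be rejected ("already claimed: type exclusive_write"), while case2 checks that one host can connect over two listeners (4420 and 4421). Reduced to plain rpc.py calls, the case1 shape is roughly the following, with the rpc.py path and sizes as in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # Expected to fail: Malloc0 is already claimed exclusively by cnode1.
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected success' >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'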
00:09:48.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.404 --rc genhtml_branch_coverage=1 00:09:48.404 --rc genhtml_function_coverage=1 00:09:48.404 --rc genhtml_legend=1 00:09:48.404 --rc geninfo_all_blocks=1 00:09:48.404 --rc geninfo_unexecuted_blocks=1 00:09:48.404 00:09:48.404 ' 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.404 --rc genhtml_branch_coverage=1 00:09:48.404 --rc genhtml_function_coverage=1 00:09:48.404 --rc genhtml_legend=1 00:09:48.404 --rc geninfo_all_blocks=1 00:09:48.404 --rc geninfo_unexecuted_blocks=1 00:09:48.404 00:09:48.404 ' 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.404 --rc genhtml_branch_coverage=1 00:09:48.404 --rc genhtml_function_coverage=1 00:09:48.404 --rc genhtml_legend=1 00:09:48.404 --rc geninfo_all_blocks=1 00:09:48.404 --rc geninfo_unexecuted_blocks=1 00:09:48.404 00:09:48.404 ' 00:09:48.404 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.405 --rc genhtml_branch_coverage=1 00:09:48.405 --rc genhtml_function_coverage=1 00:09:48.405 --rc genhtml_legend=1 00:09:48.405 --rc geninfo_all_blocks=1 00:09:48.405 --rc geninfo_unexecuted_blocks=1 00:09:48.405 00:09:48.405 ' 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.405 11:24:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.405 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.548 11:24:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:56.548 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:56.548 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.548 11:24:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:56.548 Found net devices under 0000:31:00.0: cvl_0_0 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:56.548 Found net devices under 0000:31:00.1: cvl_0_1 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.548 11:24:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.548 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:09:56.549 00:09:56.549 --- 10.0.0.2 ping statistics --- 00:09:56.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.549 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:09:56.549 00:09:56.549 --- 10.0.0.1 ping statistics --- 00:09:56.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.549 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3382119 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3382119 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3382119 ']' 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.549 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.549 [2024-12-09 11:24:47.748814] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
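For reference, the namespace plumbing that nvmf_tcp_init performs in the trace above condenses to the following sketch. It is reconstructed only from the commands logged in this run (interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses all come from this log; the iptables comment tagging is dropped for brevity, and the real logic lives in test/nvmf/common.sh):

#!/usr/bin/env bash
# Sketch of the TCP test topology set up above (simplified; not the real
# nvmf/common.sh). One E810 port (cvl_0_0) becomes the target-side NIC
# inside a network namespace; its sibling (cvl_0_1) stays in the default
# namespace as the initiator-side NIC.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the netns
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator side, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With this split, nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" (as the nvmfappstart line below shows), while the host-side nvme initiator and fio run against 10.0.0.2:4420 from the default namespace.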
00:09:56.549 [2024-12-09 11:24:47.748864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.549 [2024-12-09 11:24:47.826676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.549 [2024-12-09 11:24:47.862673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.549 [2024-12-09 11:24:47.862707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.549 [2024-12-09 11:24:47.862716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.549 [2024-12-09 11:24:47.862722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.549 [2024-12-09 11:24:47.862728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.549 [2024-12-09 11:24:47.864288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.549 [2024-12-09 11:24:47.864411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.549 [2024-12-09 11:24:47.864593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.549 [2024-12-09 11:24:47.864594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.549 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.549 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:56.549 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.549 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.549 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.549 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.549 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:56.810 [2024-12-09 11:24:48.744476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.810 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.071 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:57.071 11:24:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.071 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:57.071 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.331 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:57.331 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.591 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:57.591 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:57.851 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.851 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:57.851 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.112 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:58.112 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.372 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:58.372 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:58.372 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.633 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:58.633 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.894 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:58.894 11:24:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:59.156 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.156 [2024-12-09 11:24:51.222036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.156 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:59.417 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:59.678 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:01.062 11:24:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:01.062 11:24:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:01.062 11:24:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.062 11:24:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:01.062 11:24:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:01.062 11:24:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:03.607 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:03.607 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:03.607 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.607 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:03.607 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.607 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:03.607 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:03.607 [global] 00:10:03.608 thread=1 00:10:03.608 invalidate=1 00:10:03.608 rw=write 00:10:03.608 time_based=1 00:10:03.608 runtime=1 00:10:03.608 ioengine=libaio 00:10:03.608 direct=1 00:10:03.608 bs=4096 00:10:03.608 iodepth=1 00:10:03.608 norandommap=0 00:10:03.608 numjobs=1 00:10:03.608 00:10:03.608 verify_dump=1 00:10:03.608 verify_backlog=512 00:10:03.608 verify_state_save=0 00:10:03.608 do_verify=1 00:10:03.608 verify=crc32c-intel 00:10:03.608 [job0] 00:10:03.608 filename=/dev/nvme0n1 00:10:03.608 [job1] 00:10:03.608 filename=/dev/nvme0n2 00:10:03.608 [job2] 00:10:03.608 filename=/dev/nvme0n3 00:10:03.608 [job3] 00:10:03.608 filename=/dev/nvme0n4 00:10:03.608 Could not set queue depth (nvme0n1) 00:10:03.608 Could not set queue depth (nvme0n2) 00:10:03.608 Could not set queue depth (nvme0n3) 00:10:03.608 Could not set queue depth (nvme0n4) 00:10:03.608 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.608 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.608 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.608 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.608 fio-3.35 00:10:03.608 Starting 4 threads 00:10:04.997 00:10:04.997 job0: (groupid=0, jobs=1): err= 0: pid=3383819: Mon Dec 9 11:24:56 2024 00:10:04.997 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1023msec) 00:10:04.997 slat (nsec): min=26269, max=27347, avg=26713.11, stdev=344.49 00:10:04.997 clat (usec): min=1049, max=42077, avg=39519.53, stdev=9608.29 00:10:04.997 lat (usec): min=1077, max=42104, avg=39546.25, stdev=9608.12 00:10:04.997 clat percentiles (usec): 00:10:04.997 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 
20.00th=[41157], 00:10:04.997 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:04.997 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:04.997 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:04.997 | 99.99th=[42206] 00:10:04.997 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:10:04.997 slat (nsec): min=2992, max=68460, avg=28378.80, stdev=11242.08 00:10:04.997 clat (usec): min=151, max=1227, avg=573.42, stdev=142.28 00:10:04.997 lat (usec): min=162, max=1239, avg=601.80, stdev=145.95 00:10:04.997 clat percentiles (usec): 00:10:04.997 | 1.00th=[ 269], 5.00th=[ 351], 10.00th=[ 392], 20.00th=[ 449], 00:10:04.997 | 30.00th=[ 490], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 619], 00:10:04.997 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 783], 00:10:04.997 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:04.997 | 99.99th=[ 1221] 00:10:04.997 bw ( KiB/s): min= 4096, max= 4096, per=43.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:04.997 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:04.997 lat (usec) : 250=0.94%, 500=29.43%, 750=57.92%, 1000=7.92% 00:10:04.997 lat (msec) : 2=0.57%, 50=3.21% 00:10:04.997 cpu : usr=0.49%, sys=2.15%, ctx=531, majf=0, minf=1 00:10:04.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.997 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.997 job1: (groupid=0, jobs=1): err= 0: pid=3383820: Mon Dec 9 11:24:56 2024 00:10:04.997 read: IOPS=18, BW=73.2KiB/s (75.0kB/s)(76.0KiB/1038msec) 00:10:04.997 slat (nsec): min=25880, max=28100, avg=26427.37, stdev=504.89 00:10:04.997 clat (usec): min=1071, max=42071, avg=37624.54, stdev=12852.53 00:10:04.997 lat (usec): min=1098, max=42097, avg=37650.96, stdev=12852.53 00:10:04.997 clat percentiles (usec): 00:10:04.997 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[ 1237], 20.00th=[41681], 00:10:04.997 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:04.997 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:04.997 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:04.997 | 99.99th=[42206] 00:10:04.997 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:04.997 slat (nsec): min=9963, max=55081, avg=30385.26, stdev=10440.09 00:10:04.997 clat (usec): min=219, max=813, avg=588.70, stdev=116.37 00:10:04.997 lat (usec): min=253, max=848, avg=619.09, stdev=121.00 00:10:04.997 clat percentiles (usec): 00:10:04.997 | 1.00th=[ 265], 5.00th=[ 363], 10.00th=[ 429], 20.00th=[ 490], 00:10:04.997 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:10:04.997 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 758], 00:10:04.997 | 99.00th=[ 807], 99.50th=[ 807], 99.90th=[ 816], 99.95th=[ 816], 00:10:04.997 | 99.99th=[ 816] 00:10:04.997 bw ( KiB/s): min= 4096, max= 4096, per=43.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:04.997 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:04.997 lat (usec) : 250=0.38%, 500=22.22%, 750=67.98%, 1000=5.84% 00:10:04.997 lat (msec) : 2=0.38%, 50=3.20% 00:10:04.997 cpu : usr=0.96%, sys=1.25%, ctx=534, majf=0, 
minf=1 00:10:04.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.997 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.997 job2: (groupid=0, jobs=1): err= 0: pid=3383821: Mon Dec 9 11:24:56 2024 00:10:04.997 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:04.997 slat (nsec): min=8160, max=44860, avg=26747.45, stdev=3310.38 00:10:04.997 clat (usec): min=706, max=1242, avg=1002.51, stdev=77.13 00:10:04.997 lat (usec): min=714, max=1267, avg=1029.25, stdev=77.76 00:10:04.997 clat percentiles (usec): 00:10:04.997 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 947], 00:10:04.997 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:10:04.997 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:10:04.997 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:10:04.998 | 99.99th=[ 1237] 00:10:04.998 write: IOPS=725, BW=2901KiB/s (2971kB/s)(2904KiB/1001msec); 0 zone resets 00:10:04.998 slat (nsec): min=10268, max=83885, avg=31456.92, stdev=9724.36 00:10:04.998 clat (usec): min=224, max=995, avg=602.88, stdev=112.98 00:10:04.998 lat (usec): min=234, max=1030, avg=634.33, stdev=116.91 00:10:04.998 clat percentiles (usec): 00:10:04.998 | 1.00th=[ 322], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 515], 00:10:04.998 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 635], 00:10:04.998 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 775], 00:10:04.998 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 996], 00:10:04.998 | 99.99th=[ 996] 00:10:04.998 bw ( KiB/s): min= 4096, max= 4096, per=43.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:04.998 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:04.998 lat (usec) : 250=0.08%, 500=10.18%, 750=43.86%, 1000=21.49% 00:10:04.998 lat (msec) : 2=24.39% 00:10:04.998 cpu : usr=1.80%, sys=3.80%, ctx=1240, majf=0, minf=1 00:10:04.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.998 issued rwts: total=512,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.998 job3: (groupid=0, jobs=1): err= 0: pid=3383824: Mon Dec 9 11:24:56 2024 00:10:04.998 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:04.998 slat (nsec): min=8495, max=61626, avg=27716.93, stdev=3356.55 00:10:04.998 clat (usec): min=636, max=1251, avg=1017.23, stdev=102.16 00:10:04.998 lat (usec): min=663, max=1278, avg=1044.95, stdev=101.86 00:10:04.998 clat percentiles (usec): 00:10:04.998 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 898], 20.00th=[ 947], 00:10:04.998 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:10:04.998 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:10:04.998 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:04.998 | 99.99th=[ 1254] 00:10:04.998 write: IOPS=720, BW=2881KiB/s (2950kB/s)(2884KiB/1001msec); 0 zone resets 00:10:04.998 slat (usec): min=9, max=1337, avg=32.42, stdev=49.92 00:10:04.998 clat (usec): 
min=244, max=879, avg=594.74, stdev=110.35 00:10:04.998 lat (usec): min=280, max=2028, avg=627.16, stdev=126.10 00:10:04.998 clat percentiles (usec): 00:10:04.998 | 1.00th=[ 330], 5.00th=[ 396], 10.00th=[ 453], 20.00th=[ 502], 00:10:04.998 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:10:04.998 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 725], 95.00th=[ 758], 00:10:04.998 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 881], 99.95th=[ 881], 00:10:04.998 | 99.99th=[ 881] 00:10:04.998 bw ( KiB/s): min= 4096, max= 4096, per=43.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:04.998 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:04.998 lat (usec) : 250=0.16%, 500=11.44%, 750=43.96%, 1000=21.33% 00:10:04.998 lat (msec) : 2=23.11% 00:10:04.998 cpu : usr=2.10%, sys=3.40%, ctx=1235, majf=0, minf=1 00:10:04.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.998 issued rwts: total=512,721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.998 00:10:04.998 Run status group 0 (all jobs): 00:10:04.998 READ: bw=4089KiB/s (4187kB/s), 70.4KiB/s-2046KiB/s (72.1kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1038msec 00:10:04.998 WRITE: bw=9522KiB/s (9751kB/s), 1973KiB/s-2901KiB/s (2020kB/s-2971kB/s), io=9884KiB (10.1MB), run=1001-1038msec 00:10:04.998 00:10:04.998 Disk stats (read/write): 00:10:04.998 nvme0n1: ios=63/512, merge=0/0, ticks=546/240, in_queue=786, util=86.27% 00:10:04.998 nvme0n2: ios=37/512, merge=0/0, ticks=1472/292, in_queue=1764, util=96.63% 00:10:04.998 nvme0n3: ios=501/512, merge=0/0, ticks=1392/297, in_queue=1689, util=96.40% 00:10:04.998 nvme0n4: ios=533/512, merge=0/0, ticks=710/296, in_queue=1006, util=100.00% 00:10:04.998 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:04.998 [global] 00:10:04.998 thread=1 00:10:04.998 invalidate=1 00:10:04.998 rw=randwrite 00:10:04.998 time_based=1 00:10:04.998 runtime=1 00:10:04.998 ioengine=libaio 00:10:04.998 direct=1 00:10:04.998 bs=4096 00:10:04.998 iodepth=1 00:10:04.998 norandommap=0 00:10:04.998 numjobs=1 00:10:04.998 00:10:04.998 verify_dump=1 00:10:04.998 verify_backlog=512 00:10:04.998 verify_state_save=0 00:10:04.998 do_verify=1 00:10:04.998 verify=crc32c-intel 00:10:04.998 [job0] 00:10:04.998 filename=/dev/nvme0n1 00:10:04.998 [job1] 00:10:04.998 filename=/dev/nvme0n2 00:10:04.998 [job2] 00:10:04.998 filename=/dev/nvme0n3 00:10:04.998 [job3] 00:10:04.998 filename=/dev/nvme0n4 00:10:04.998 Could not set queue depth (nvme0n1) 00:10:04.998 Could not set queue depth (nvme0n2) 00:10:04.998 Could not set queue depth (nvme0n3) 00:10:04.998 Could not set queue depth (nvme0n4) 00:10:05.260 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.260 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.260 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.260 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.260 fio-3.35 00:10:05.260 Starting 4 
threads 00:10:06.651 00:10:06.651 job0: (groupid=0, jobs=1): err= 0: pid=3384347: Mon Dec 9 11:24:58 2024 00:10:06.651 read: IOPS=652, BW=2609KiB/s (2672kB/s)(2612KiB/1001msec) 00:10:06.651 slat (nsec): min=6818, max=56904, avg=24744.00, stdev=7187.98 00:10:06.651 clat (usec): min=230, max=1014, avg=712.20, stdev=116.83 00:10:06.651 lat (usec): min=238, max=1040, avg=736.95, stdev=118.28 00:10:06.651 clat percentiles (usec): 00:10:06.651 | 1.00th=[ 433], 5.00th=[ 529], 10.00th=[ 553], 20.00th=[ 603], 00:10:06.651 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 717], 60.00th=[ 758], 00:10:06.651 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 881], 00:10:06.651 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1012], 99.95th=[ 1012], 00:10:06.651 | 99.99th=[ 1012] 00:10:06.651 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:06.651 slat (nsec): min=9285, max=63636, avg=28387.09, stdev=10919.48 00:10:06.651 clat (usec): min=140, max=861, avg=465.59, stdev=130.21 00:10:06.651 lat (usec): min=149, max=895, avg=493.97, stdev=135.48 00:10:06.651 clat percentiles (usec): 00:10:06.651 | 1.00th=[ 212], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 351], 00:10:06.651 | 30.00th=[ 388], 40.00th=[ 429], 50.00th=[ 469], 60.00th=[ 502], 00:10:06.651 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 644], 95.00th=[ 693], 00:10:06.651 | 99.00th=[ 783], 99.50th=[ 791], 99.90th=[ 840], 99.95th=[ 865], 00:10:06.651 | 99.99th=[ 865] 00:10:06.651 bw ( KiB/s): min= 4096, max= 4096, per=38.88%, avg=4096.00, stdev= 0.00, samples=1 00:10:06.651 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:06.651 lat (usec) : 250=2.03%, 500=35.72%, 750=44.72%, 1000=17.47% 00:10:06.651 lat (msec) : 2=0.06% 00:10:06.651 cpu : usr=2.40%, sys=4.70%, ctx=1679, majf=0, minf=1 00:10:06.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.651 issued rwts: total=653,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.651 job1: (groupid=0, jobs=1): err= 0: pid=3384348: Mon Dec 9 11:24:58 2024 00:10:06.651 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:06.651 slat (nsec): min=7122, max=69010, avg=28945.41, stdev=4483.13 00:10:06.651 clat (usec): min=593, max=4242, avg=996.68, stdev=177.31 00:10:06.651 lat (usec): min=622, max=4270, avg=1025.63, stdev=177.29 00:10:06.651 clat percentiles (usec): 00:10:06.651 | 1.00th=[ 668], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 922], 00:10:06.651 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1029], 00:10:06.651 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:10:06.651 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 4228], 99.95th=[ 4228], 00:10:06.651 | 99.99th=[ 4228] 00:10:06.651 write: IOPS=682, BW=2729KiB/s (2795kB/s)(2732KiB/1001msec); 0 zone resets 00:10:06.651 slat (nsec): min=9372, max=64745, avg=32618.52, stdev=9342.41 00:10:06.652 clat (usec): min=223, max=1092, avg=647.84, stdev=135.83 00:10:06.652 lat (usec): min=233, max=1126, avg=680.46, stdev=138.64 00:10:06.652 clat percentiles (usec): 00:10:06.652 | 1.00th=[ 347], 5.00th=[ 429], 10.00th=[ 486], 20.00th=[ 545], 00:10:06.652 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:10:06.652 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 824], 95.00th=[ 898], 
00:10:06.652 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1090], 99.95th=[ 1090], 00:10:06.652 | 99.99th=[ 1090] 00:10:06.652 bw ( KiB/s): min= 4096, max= 4096, per=38.88%, avg=4096.00, stdev= 0.00, samples=1 00:10:06.652 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:06.652 lat (usec) : 250=0.17%, 500=6.95%, 750=40.00%, 1000=30.13% 00:10:06.652 lat (msec) : 2=22.68%, 10=0.08% 00:10:06.652 cpu : usr=2.30%, sys=5.20%, ctx=1197, majf=0, minf=1 00:10:06.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.652 issued rwts: total=512,683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.652 job2: (groupid=0, jobs=1): err= 0: pid=3384349: Mon Dec 9 11:24:58 2024 00:10:06.652 read: IOPS=15, BW=63.7KiB/s (65.2kB/s)(64.0KiB/1005msec) 00:10:06.652 slat (nsec): min=10473, max=28747, avg=26516.88, stdev=4296.76 00:10:06.652 clat (usec): min=40951, max=42140, avg=41513.29, stdev=478.62 00:10:06.652 lat (usec): min=40978, max=42168, avg=41539.80, stdev=478.26 00:10:06.652 clat percentiles (usec): 00:10:06.652 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:06.652 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:06.652 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:06.652 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:06.652 | 99.99th=[42206] 00:10:06.652 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:06.652 slat (nsec): min=9521, max=69687, avg=32220.16, stdev=9150.26 00:10:06.652 clat (usec): min=181, max=950, avg=622.67, stdev=128.30 00:10:06.652 lat (usec): min=193, max=985, avg=654.89, stdev=131.48 00:10:06.652 clat percentiles (usec): 00:10:06.652 | 1.00th=[ 302], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 519], 00:10:06.652 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:10:06.652 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 824], 00:10:06.652 | 99.00th=[ 873], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:10:06.652 | 99.99th=[ 955] 00:10:06.652 bw ( KiB/s): min= 4096, max= 4096, per=38.88%, avg=4096.00, stdev= 0.00, samples=1 00:10:06.652 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:06.652 lat (usec) : 250=0.19%, 500=16.48%, 750=65.72%, 1000=14.58% 00:10:06.652 lat (msec) : 50=3.03% 00:10:06.652 cpu : usr=0.80%, sys=2.39%, ctx=530, majf=0, minf=1 00:10:06.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.652 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.652 job3: (groupid=0, jobs=1): err= 0: pid=3384350: Mon Dec 9 11:24:58 2024 00:10:06.652 read: IOPS=68, BW=274KiB/s (280kB/s)(284KiB/1037msec) 00:10:06.652 slat (nsec): min=6890, max=27500, avg=22340.11, stdev=7613.04 00:10:06.652 clat (usec): min=656, max=41933, avg=11076.91, stdev=17584.03 00:10:06.652 lat (usec): min=665, max=41960, avg=11099.25, stdev=17585.54 00:10:06.652 clat percentiles (usec): 00:10:06.652 | 1.00th=[ 660], 5.00th=[ 709], 10.00th=[ 
758], 20.00th=[ 848], 00:10:06.652 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 979], 00:10:06.652 | 70.00th=[ 1012], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:06.652 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:06.652 | 99.99th=[41681] 00:10:06.652 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:06.652 slat (nsec): min=5744, max=52909, avg=29139.00, stdev=9300.66 00:10:06.652 clat (usec): min=133, max=827, avg=447.08, stdev=117.91 00:10:06.652 lat (usec): min=167, max=861, avg=476.22, stdev=118.88 00:10:06.652 clat percentiles (usec): 00:10:06.652 | 1.00th=[ 204], 5.00th=[ 285], 10.00th=[ 318], 20.00th=[ 343], 00:10:06.652 | 30.00th=[ 363], 40.00th=[ 408], 50.00th=[ 445], 60.00th=[ 478], 00:10:06.652 | 70.00th=[ 498], 80.00th=[ 545], 90.00th=[ 603], 95.00th=[ 652], 00:10:06.652 | 99.00th=[ 766], 99.50th=[ 816], 99.90th=[ 832], 99.95th=[ 832], 00:10:06.652 | 99.99th=[ 832] 00:10:06.652 bw ( KiB/s): min= 4096, max= 4096, per=38.88%, avg=4096.00, stdev= 0.00, samples=1 00:10:06.652 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:06.652 lat (usec) : 250=2.57%, 500=59.01%, 750=26.24%, 1000=7.72% 00:10:06.652 lat (msec) : 2=1.37%, 50=3.09% 00:10:06.652 cpu : usr=0.97%, sys=1.35%, ctx=587, majf=0, minf=1 00:10:06.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.652 issued rwts: total=71,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.652 00:10:06.652 Run status group 0 (all jobs): 00:10:06.652 READ: bw=4829KiB/s (4945kB/s), 63.7KiB/s-2609KiB/s (65.2kB/s-2672kB/s), io=5008KiB (5128kB), run=1001-1037msec 00:10:06.652 WRITE: bw=10.3MiB/s (10.8MB/s), 1975KiB/s-4092KiB/s (2022kB/s-4190kB/s), io=10.7MiB (11.2MB), run=1001-1037msec 00:10:06.652 00:10:06.652 Disk stats (read/write): 00:10:06.652 nvme0n1: ios=529/904, merge=0/0, ticks=652/388, in_queue=1040, util=99.60% 00:10:06.652 nvme0n2: ios=508/512, merge=0/0, ticks=668/271, in_queue=939, util=96.94% 00:10:06.652 nvme0n3: ios=33/512, merge=0/0, ticks=1380/250, in_queue=1630, util=96.94% 00:10:06.652 nvme0n4: ios=81/512, merge=0/0, ticks=1496/218, in_queue=1714, util=96.90% 00:10:06.652 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:06.652 [global] 00:10:06.652 thread=1 00:10:06.652 invalidate=1 00:10:06.652 rw=write 00:10:06.652 time_based=1 00:10:06.652 runtime=1 00:10:06.652 ioengine=libaio 00:10:06.652 direct=1 00:10:06.652 bs=4096 00:10:06.652 iodepth=128 00:10:06.652 norandommap=0 00:10:06.652 numjobs=1 00:10:06.652 00:10:06.652 verify_dump=1 00:10:06.652 verify_backlog=512 00:10:06.652 verify_state_save=0 00:10:06.652 do_verify=1 00:10:06.652 verify=crc32c-intel 00:10:06.652 [job0] 00:10:06.652 filename=/dev/nvme0n1 00:10:06.652 [job1] 00:10:06.652 filename=/dev/nvme0n2 00:10:06.652 [job2] 00:10:06.652 filename=/dev/nvme0n3 00:10:06.652 [job3] 00:10:06.652 filename=/dev/nvme0n4 00:10:06.652 Could not set queue depth (nvme0n1) 00:10:06.652 Could not set queue depth (nvme0n2) 00:10:06.652 Could not set queue depth (nvme0n3) 00:10:06.652 Could not set queue depth (nvme0n4) 00:10:06.915 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.915 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.915 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.915 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.915 fio-3.35 00:10:06.915 Starting 4 threads 00:10:08.302 00:10:08.302 job0: (groupid=0, jobs=1): err= 0: pid=3384868: Mon Dec 9 11:25:00 2024 00:10:08.302 read: IOPS=3737, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1003msec) 00:10:08.302 slat (nsec): min=877, max=9908.3k, avg=126701.94, stdev=748971.10 00:10:08.302 clat (usec): min=1622, max=28269, avg=15348.28, stdev=3059.07 00:10:08.302 lat (usec): min=3678, max=28280, avg=15474.99, stdev=3124.58 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[ 9110], 5.00th=[11338], 10.00th=[12518], 20.00th=[12911], 00:10:08.302 | 30.00th=[13173], 40.00th=[14484], 50.00th=[15139], 60.00th=[15533], 00:10:08.302 | 70.00th=[16450], 80.00th=[18220], 90.00th=[19006], 95.00th=[21103], 00:10:08.302 | 99.00th=[23200], 99.50th=[25297], 99.90th=[26084], 99.95th=[27919], 00:10:08.302 | 99.99th=[28181] 00:10:08.302 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:08.302 slat (nsec): min=1558, max=11973k, avg=123404.69, stdev=539394.05 00:10:08.302 clat (usec): min=4530, max=28008, avg=16946.78, stdev=4780.54 00:10:08.302 lat (usec): min=4539, max=28016, avg=17070.18, stdev=4819.12 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[ 6128], 5.00th=[ 7242], 10.00th=[11469], 20.00th=[12125], 00:10:08.302 | 30.00th=[12649], 40.00th=[17433], 50.00th=[18482], 60.00th=[18482], 00:10:08.302 | 70.00th=[19530], 80.00th=[20841], 90.00th=[22414], 95.00th=[24511], 00:10:08.302 | 99.00th=[26870], 99.50th=[27395], 99.90th=[27919], 99.95th=[27919], 00:10:08.302 | 99.99th=[27919] 00:10:08.302 bw ( KiB/s): min=16384, max=16384, per=18.47%, avg=16384.00, stdev= 0.00, samples=2 00:10:08.302 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:08.302 lat (msec) : 2=0.01%, 4=0.24%, 10=4.19%, 20=77.91%, 50=17.64% 00:10:08.302 cpu : usr=2.50%, sys=3.89%, ctx=500, majf=0, minf=1 00:10:08.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:08.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.302 issued rwts: total=3749,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.302 job1: (groupid=0, jobs=1): err= 0: pid=3384870: Mon Dec 9 11:25:00 2024 00:10:08.302 read: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:10:08.302 slat (nsec): min=881, max=12087k, avg=172059.13, stdev=994041.47 00:10:08.302 clat (usec): min=1499, max=47611, avg=20805.32, stdev=6818.87 00:10:08.302 lat (usec): min=8864, max=47635, avg=20977.38, stdev=6895.79 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[ 9110], 5.00th=[14222], 10.00th=[14484], 20.00th=[15270], 00:10:08.302 | 30.00th=[15664], 40.00th=[16909], 50.00th=[19006], 60.00th=[20579], 00:10:08.302 | 70.00th=[22676], 80.00th=[26084], 90.00th=[32113], 95.00th=[35914], 00:10:08.302 | 99.00th=[37487], 99.50th=[37487], 99.90th=[43779], 99.95th=[45351], 00:10:08.302 | 99.99th=[47449] 00:10:08.302 write: IOPS=3062, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:10:08.302 slat (nsec): min=1575, max=13196k, avg=161629.73, stdev=740176.82 00:10:08.302 clat (usec): min=12847, max=42302, avg=21647.88, stdev=5576.02 00:10:08.302 lat (usec): min=12851, max=42335, avg=21809.51, stdev=5624.60 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[13960], 5.00th=[16057], 10.00th=[16909], 20.00th=[17957], 00:10:08.302 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19530], 00:10:08.302 | 70.00th=[23462], 80.00th=[27395], 90.00th=[31851], 95.00th=[33162], 00:10:08.302 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[40109], 00:10:08.302 | 99.99th=[42206] 00:10:08.302 bw ( KiB/s): min=12288, max=12288, per=13.85%, avg=12288.00, stdev= 0.00, samples=2 00:10:08.302 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:08.302 lat (msec) : 2=0.02%, 10=0.71%, 20=58.43%, 50=40.84% 00:10:08.302 cpu : usr=2.30%, sys=3.19%, ctx=388, majf=0, minf=1 00:10:08.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:08.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.302 issued rwts: total=2836,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.302 job2: (groupid=0, jobs=1): err= 0: pid=3384878: Mon Dec 9 11:25:00 2024 00:10:08.302 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:10:08.302 slat (nsec): min=916, max=3132.3k, avg=76951.88, stdev=354933.23 00:10:08.302 clat (usec): min=3172, max=12175, avg=9847.58, stdev=961.39 00:10:08.302 lat (usec): min=3174, max=12358, avg=9924.53, stdev=909.84 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[ 6849], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9241], 00:10:08.302 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:10:08.302 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:10:08.302 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12125], 99.95th=[12125], 00:10:08.302 | 99.99th=[12125] 00:10:08.302 write: IOPS=6655, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1002msec); 0 zone resets 00:10:08.302 slat (nsec): min=1561, max=10290k, avg=70612.09, stdev=359587.43 00:10:08.302 clat (usec): min=857, max=20081, avg=9227.79, stdev=1211.37 00:10:08.302 lat (usec): min=1223, max=20091, avg=9298.40, stdev=1175.39 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[ 5145], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 8225], 00:10:08.302 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:10:08.302 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:10:08.302 | 99.00th=[12911], 99.50th=[13435], 99.90th=[16188], 99.95th=[16188], 00:10:08.302 | 99.99th=[20055] 00:10:08.302 bw ( KiB/s): min=26208, max=27040, per=30.02%, avg=26624.00, stdev=588.31, samples=2 00:10:08.302 iops : min= 6552, max= 6760, avg=6656.00, stdev=147.08, samples=2 00:10:08.302 lat (usec) : 1000=0.01% 00:10:08.302 lat (msec) : 2=0.11%, 4=0.50%, 10=73.68%, 20=25.70%, 50=0.01% 00:10:08.302 cpu : usr=2.60%, sys=3.60%, ctx=767, majf=0, minf=1 00:10:08.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:08.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.302 issued rwts: total=6656,6669,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:08.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.302 job3: (groupid=0, jobs=1): err= 0: pid=3384879: Mon Dec 9 11:25:00 2024 00:10:08.302 read: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec) 00:10:08.302 slat (nsec): min=944, max=7214.7k, avg=63547.59, stdev=466701.15 00:10:08.302 clat (usec): min=2828, max=15720, avg=8372.07, stdev=1924.77 00:10:08.302 lat (usec): min=2833, max=15729, avg=8435.62, stdev=1949.35 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[ 4047], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 7111], 00:10:08.302 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8160], 00:10:08.302 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[12387], 00:10:08.302 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15664], 99.95th=[15664], 00:10:08.302 | 99.99th=[15664] 00:10:08.302 write: IOPS=8404, BW=32.8MiB/s (34.4MB/s)(33.0MiB/1005msec); 0 zone resets 00:10:08.302 slat (nsec): min=1601, max=6209.0k, avg=51911.25, stdev=345433.03 00:10:08.302 clat (usec): min=1043, max=14459, avg=6980.08, stdev=1667.28 00:10:08.302 lat (usec): min=1156, max=14485, avg=7031.99, stdev=1688.77 00:10:08.302 clat percentiles (usec): 00:10:08.302 | 1.00th=[ 2671], 5.00th=[ 4113], 10.00th=[ 4686], 20.00th=[ 5080], 00:10:08.302 | 30.00th=[ 6521], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7635], 00:10:08.302 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 9896], 00:10:08.302 | 99.00th=[10683], 99.50th=[10945], 99.90th=[14222], 99.95th=[14484], 00:10:08.302 | 99.99th=[14484] 00:10:08.302 bw ( KiB/s): min=33096, max=33456, per=37.52%, avg=33276.00, stdev=254.56, samples=2 00:10:08.302 iops : min= 8274, max= 8364, avg=8319.00, stdev=63.64, samples=2 00:10:08.302 lat (msec) : 2=0.22%, 4=2.64%, 10=85.90%, 20=11.23% 00:10:08.302 cpu : usr=5.18%, sys=8.96%, ctx=700, majf=0, minf=2 00:10:08.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:08.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.302 issued rwts: total=8192,8447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.302 00:10:08.302 Run status group 0 (all jobs): 00:10:08.302 READ: bw=83.3MiB/s (87.4MB/s), 11.0MiB/s-31.8MiB/s (11.6MB/s-33.4MB/s), io=83.7MiB (87.8MB), run=1002-1005msec 00:10:08.302 WRITE: bw=86.6MiB/s (90.8MB/s), 12.0MiB/s-32.8MiB/s (12.5MB/s-34.4MB/s), io=87.0MiB (91.3MB), run=1002-1005msec 00:10:08.302 00:10:08.303 Disk stats (read/write): 00:10:08.303 nvme0n1: ios=3122/3447, merge=0/0, ticks=22982/28088, in_queue=51070, util=95.59% 00:10:08.303 nvme0n2: ios=2399/2560, merge=0/0, ticks=15981/18183, in_queue=34164, util=87.24% 00:10:08.303 nvme0n3: ios=5520/5632, merge=0/0, ticks=13692/13701, in_queue=27393, util=88.37% 00:10:08.303 nvme0n4: ios=6656/7119, merge=0/0, ticks=52954/48026, in_queue=100980, util=89.51% 00:10:08.303 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:08.303 [global] 00:10:08.303 thread=1 00:10:08.303 invalidate=1 00:10:08.303 rw=randwrite 00:10:08.303 time_based=1 00:10:08.303 runtime=1 00:10:08.303 ioengine=libaio 00:10:08.303 direct=1 00:10:08.303 bs=4096 00:10:08.303 iodepth=128 00:10:08.303 norandommap=0 00:10:08.303 numjobs=1 00:10:08.303 00:10:08.303 
verify_dump=1 00:10:08.303 verify_backlog=512 00:10:08.303 verify_state_save=0 00:10:08.303 do_verify=1 00:10:08.303 verify=crc32c-intel 00:10:08.303 [job0] 00:10:08.303 filename=/dev/nvme0n1 00:10:08.303 [job1] 00:10:08.303 filename=/dev/nvme0n2 00:10:08.303 [job2] 00:10:08.303 filename=/dev/nvme0n3 00:10:08.303 [job3] 00:10:08.303 filename=/dev/nvme0n4 00:10:08.303 Could not set queue depth (nvme0n1) 00:10:08.303 Could not set queue depth (nvme0n2) 00:10:08.303 Could not set queue depth (nvme0n3) 00:10:08.303 Could not set queue depth (nvme0n4) 00:10:08.565 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.565 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.565 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.565 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.565 fio-3.35 00:10:08.565 Starting 4 threads 00:10:09.953 00:10:09.953 job0: (groupid=0, jobs=1): err= 0: pid=3385396: Mon Dec 9 11:25:01 2024 00:10:09.953 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:10:09.953 slat (nsec): min=902, max=8515.0k, avg=67040.41, stdev=436621.62 00:10:09.953 clat (usec): min=2583, max=22540, avg=8681.81, stdev=2443.51 00:10:09.953 lat (usec): min=2586, max=22544, avg=8748.85, stdev=2474.44 00:10:09.953 clat percentiles (usec): 00:10:09.953 | 1.00th=[ 3916], 5.00th=[ 5735], 10.00th=[ 6587], 20.00th=[ 7046], 00:10:09.953 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 8455], 00:10:09.953 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11994], 95.00th=[13566], 00:10:09.953 | 99.00th=[16319], 99.50th=[18744], 99.90th=[20055], 99.95th=[22414], 00:10:09.953 | 99.99th=[22414] 00:10:09.953 write: IOPS=7497, BW=29.3MiB/s (30.7MB/s)(29.4MiB/1003msec); 0 zone resets 00:10:09.953 slat (nsec): min=1562, max=9528.0k, avg=61211.48, stdev=333633.75 00:10:09.953 clat (usec): min=562, max=28191, avg=8645.51, stdev=3835.12 00:10:09.953 lat (usec): min=568, max=28195, avg=8706.72, stdev=3850.12 00:10:09.953 clat percentiles (usec): 00:10:09.953 | 1.00th=[ 1909], 5.00th=[ 4817], 10.00th=[ 5932], 20.00th=[ 6652], 00:10:09.953 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7898], 00:10:09.953 | 70.00th=[ 8291], 80.00th=[ 9634], 90.00th=[14615], 95.00th=[17171], 00:10:09.953 | 99.00th=[21365], 99.50th=[22152], 99.90th=[24249], 99.95th=[25035], 00:10:09.953 | 99.99th=[28181] 00:10:09.953 bw ( KiB/s): min=26552, max=32584, per=32.01%, avg=29568.00, stdev=4265.27, samples=2 00:10:09.953 iops : min= 6638, max= 8146, avg=7392.00, stdev=1066.32, samples=2 00:10:09.953 lat (usec) : 750=0.07%, 1000=0.20% 00:10:09.953 lat (msec) : 2=0.48%, 4=1.88%, 10=76.12%, 20=19.79%, 50=1.45% 00:10:09.953 cpu : usr=3.89%, sys=7.19%, ctx=823, majf=0, minf=1 00:10:09.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:09.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.953 issued rwts: total=7168,7520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.953 job1: (groupid=0, jobs=1): err= 0: pid=3385397: Mon Dec 9 11:25:01 2024 00:10:09.953 read: IOPS=5979, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1003msec) 00:10:09.953 slat 
(nsec): min=878, max=11174k, avg=82441.70, stdev=541620.30 00:10:09.953 clat (usec): min=1292, max=29212, avg=10738.92, stdev=3854.52 00:10:09.953 lat (usec): min=3015, max=35487, avg=10821.36, stdev=3868.15 00:10:09.953 clat percentiles (usec): 00:10:09.953 | 1.00th=[ 4424], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 8586], 00:10:09.954 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10421], 00:10:09.954 | 70.00th=[10945], 80.00th=[11863], 90.00th=[15008], 95.00th=[18744], 00:10:09.954 | 99.00th=[26346], 99.50th=[28443], 99.90th=[28705], 99.95th=[29230], 00:10:09.954 | 99.99th=[29230] 00:10:09.954 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:09.954 slat (nsec): min=1483, max=12764k, avg=74597.78, stdev=496700.50 00:10:09.954 clat (usec): min=1231, max=47630, avg=10216.71, stdev=6243.06 00:10:09.954 lat (usec): min=1242, max=47642, avg=10291.31, stdev=6278.96 00:10:09.954 clat percentiles (usec): 00:10:09.954 | 1.00th=[ 3949], 5.00th=[ 6063], 10.00th=[ 7308], 20.00th=[ 7767], 00:10:09.954 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:10:09.954 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11994], 95.00th=[14877], 00:10:09.954 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:10:09.954 | 99.99th=[47449] 00:10:09.954 bw ( KiB/s): min=24576, max=24576, per=26.61%, avg=24576.00, stdev= 0.00, samples=2 00:10:09.954 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:09.954 lat (msec) : 2=0.05%, 4=0.99%, 10=63.57%, 20=31.59%, 50=3.81% 00:10:09.954 cpu : usr=3.39%, sys=5.69%, ctx=642, majf=0, minf=1 00:10:09.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:09.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.954 issued rwts: total=5997,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.954 job2: (groupid=0, jobs=1): err= 0: pid=3385398: Mon Dec 9 11:25:01 2024 00:10:09.954 read: IOPS=5039, BW=19.7MiB/s (20.6MB/s)(20.0MiB/1016msec) 00:10:09.954 slat (nsec): min=925, max=15584k, avg=107320.96, stdev=812160.89 00:10:09.954 clat (usec): min=1972, max=68647, avg=13644.62, stdev=8189.95 00:10:09.954 lat (usec): min=1982, max=68656, avg=13751.94, stdev=8241.28 00:10:09.954 clat percentiles (usec): 00:10:09.954 | 1.00th=[ 3589], 5.00th=[ 7046], 10.00th=[ 7570], 20.00th=[ 8979], 00:10:09.954 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11207], 60.00th=[11994], 00:10:09.954 | 70.00th=[13698], 80.00th=[15139], 90.00th=[24773], 95.00th=[30802], 00:10:09.954 | 99.00th=[45351], 99.50th=[61604], 99.90th=[67634], 99.95th=[68682], 00:10:09.954 | 99.99th=[68682] 00:10:09.954 write: IOPS=5249, BW=20.5MiB/s (21.5MB/s)(20.8MiB/1016msec); 0 zone resets 00:10:09.954 slat (nsec): min=1529, max=9543.7k, avg=70146.19, stdev=491668.85 00:10:09.954 clat (usec): min=1041, max=64065, avg=11072.95, stdev=6124.90 00:10:09.954 lat (usec): min=1231, max=64069, avg=11143.10, stdev=6147.62 00:10:09.954 clat percentiles (usec): 00:10:09.954 | 1.00th=[ 2999], 5.00th=[ 4752], 10.00th=[ 5997], 20.00th=[ 7570], 00:10:09.954 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945], 00:10:09.954 | 70.00th=[11469], 80.00th=[11994], 90.00th=[16319], 95.00th=[21103], 00:10:09.954 | 99.00th=[36439], 99.50th=[55313], 99.90th=[59507], 99.95th=[64226], 00:10:09.954 | 99.99th=[64226] 00:10:09.954 
bw ( KiB/s): min=17072, max=24576, per=22.55%, avg=20824.00, stdev=5306.13, samples=2 00:10:09.954 iops : min= 4268, max= 6144, avg=5206.00, stdev=1326.53, samples=2 00:10:09.954 lat (msec) : 2=0.19%, 4=1.47%, 10=37.33%, 20=50.88%, 50=9.38% 00:10:09.954 lat (msec) : 100=0.75% 00:10:09.954 cpu : usr=3.94%, sys=5.71%, ctx=409, majf=0, minf=3 00:10:09.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:09.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.954 issued rwts: total=5120,5333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.954 job3: (groupid=0, jobs=1): err= 0: pid=3385399: Mon Dec 9 11:25:01 2024 00:10:09.954 read: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1016msec) 00:10:09.954 slat (nsec): min=944, max=12627k, avg=106402.24, stdev=785189.36 00:10:09.954 clat (usec): min=5792, max=40781, avg=13393.80, stdev=7338.23 00:10:09.954 lat (usec): min=6050, max=40794, avg=13500.20, stdev=7401.58 00:10:09.954 clat percentiles (usec): 00:10:09.954 | 1.00th=[ 6849], 5.00th=[ 7767], 10.00th=[ 7832], 20.00th=[ 9241], 00:10:09.954 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10683], 60.00th=[11600], 00:10:09.954 | 70.00th=[12649], 80.00th=[16450], 90.00th=[23987], 95.00th=[31851], 00:10:09.954 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:10:09.954 | 99.99th=[40633] 00:10:09.954 write: IOPS=4391, BW=17.2MiB/s (18.0MB/s)(17.4MiB/1016msec); 0 zone resets 00:10:09.954 slat (nsec): min=1617, max=19927k, avg=120656.33, stdev=841628.54 00:10:09.954 clat (usec): min=3541, max=93627, avg=16479.52, stdev=13345.46 00:10:09.954 lat (usec): min=3549, max=93653, avg=16600.18, stdev=13424.27 00:10:09.954 clat percentiles (usec): 00:10:09.954 | 1.00th=[ 5211], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8094], 00:10:09.954 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[14484], 00:10:09.954 | 70.00th=[19006], 80.00th=[22152], 90.00th=[29492], 95.00th=[37487], 00:10:09.954 | 99.00th=[85459], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:10:09.954 | 99.99th=[93848] 00:10:09.954 bw ( KiB/s): min=17160, max=17520, per=18.77%, avg=17340.00, stdev=254.56, samples=2 00:10:09.954 iops : min= 4290, max= 4380, avg=4335.00, stdev=63.64, samples=2 00:10:09.954 lat (msec) : 4=0.13%, 10=42.25%, 20=35.18%, 50=20.86%, 100=1.58% 00:10:09.954 cpu : usr=3.25%, sys=4.63%, ctx=301, majf=0, minf=1 00:10:09.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:09.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.954 issued rwts: total=4096,4462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.954 00:10:09.954 Run status group 0 (all jobs): 00:10:09.954 READ: bw=86.0MiB/s (90.2MB/s), 15.7MiB/s-27.9MiB/s (16.5MB/s-29.3MB/s), io=87.4MiB (91.7MB), run=1003-1016msec 00:10:09.954 WRITE: bw=90.2MiB/s (94.6MB/s), 17.2MiB/s-29.3MiB/s (18.0MB/s-30.7MB/s), io=91.6MiB (96.1MB), run=1003-1016msec 00:10:09.954 00:10:09.954 Disk stats (read/write): 00:10:09.954 nvme0n1: ios=5975/6144, merge=0/0, ticks=32998/35516, in_queue=68514, util=87.78% 00:10:09.954 nvme0n2: ios=4832/5120, merge=0/0, ticks=35493/33872, in_queue=69365, util=86.85% 00:10:09.954 nvme0n3: 
ios=4503/4608, merge=0/0, ticks=42866/33970, in_queue=76836, util=91.24% 00:10:09.954 nvme0n4: ios=3618/3719, merge=0/0, ticks=24613/25769, in_queue=50382, util=96.15% 00:10:09.954 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:09.954 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3385729 00:10:09.954 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:09.954 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:09.954 [global] 00:10:09.954 thread=1 00:10:09.954 invalidate=1 00:10:09.954 rw=read 00:10:09.954 time_based=1 00:10:09.954 runtime=10 00:10:09.954 ioengine=libaio 00:10:09.954 direct=1 00:10:09.954 bs=4096 00:10:09.954 iodepth=1 00:10:09.954 norandommap=1 00:10:09.954 numjobs=1 00:10:09.954 00:10:09.954 [job0] 00:10:09.954 filename=/dev/nvme0n1 00:10:09.954 [job1] 00:10:09.954 filename=/dev/nvme0n2 00:10:09.954 [job2] 00:10:09.954 filename=/dev/nvme0n3 00:10:09.954 [job3] 00:10:09.954 filename=/dev/nvme0n4 00:10:09.954 Could not set queue depth (nvme0n1) 00:10:09.954 Could not set queue depth (nvme0n2) 00:10:09.954 Could not set queue depth (nvme0n3) 00:10:09.954 Could not set queue depth (nvme0n4) 00:10:10.216 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.216 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.216 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.216 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.216 fio-3.35 00:10:10.216 Starting 4 threads 00:10:13.517 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:13.517 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10321920, buflen=4096 00:10:13.517 fio: pid=3385929, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.517 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:13.517 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.517 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:13.517 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=270336, buflen=4096 00:10:13.517 fio: pid=3385928, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.517 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.517 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:13.517 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=286720, buflen=4096 00:10:13.517 fio: pid=3385925, err=95/file:io_u.c:1889, func=io_u error, error=Operation 
not supported 00:10:13.517 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=299008, buflen=4096 00:10:13.517 fio: pid=3385927, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.517 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.517 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:13.517 00:10:13.517 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385925: Mon Dec 9 11:25:05 2024 00:10:13.517 read: IOPS=24, BW=96.2KiB/s (98.5kB/s)(280KiB/2911msec) 00:10:13.517 slat (usec): min=8, max=8571, avg=253.15, stdev=1344.50 00:10:13.517 clat (usec): min=648, max=42214, avg=41012.75, stdev=4914.97 00:10:13.517 lat (usec): min=688, max=49970, avg=41269.15, stdev=5114.25 00:10:13.517 clat percentiles (usec): 00:10:13.517 | 1.00th=[ 652], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:13.517 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:13.517 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:13.517 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.517 | 99.99th=[42206] 00:10:13.517 bw ( KiB/s): min= 96, max= 104, per=2.72%, avg=97.60, stdev= 3.58, samples=5 00:10:13.517 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:13.517 lat (usec) : 750=1.41% 00:10:13.517 lat (msec) : 50=97.18% 00:10:13.517 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=2 00:10:13.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.517 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.517 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.517 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385927: Mon Dec 9 11:25:05 2024 00:10:13.517 read: IOPS=24, BW=95.4KiB/s (97.7kB/s)(292KiB/3061msec) 00:10:13.517 slat (usec): min=25, max=21609, avg=495.05, stdev=2711.48 00:10:13.517 clat (usec): min=1139, max=43105, avg=41287.96, stdev=4777.60 00:10:13.517 lat (usec): min=1198, max=62996, avg=41789.40, stdev=5508.60 00:10:13.517 clat percentiles (usec): 00:10:13.517 | 1.00th=[ 1139], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:13.517 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:13.517 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:13.517 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:13.517 | 99.99th=[43254] 00:10:13.517 bw ( KiB/s): min= 89, max= 96, per=2.64%, avg=94.83, stdev= 2.86, samples=6 00:10:13.517 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:10:13.517 lat (msec) : 2=1.35%, 50=97.30% 00:10:13.518 cpu : usr=0.16%, sys=0.00%, ctx=78, majf=0, minf=2 00:10:13.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.518 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.518 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:13.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.518 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385928: Mon Dec 9 11:25:05 2024 00:10:13.518 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(264KiB/2748msec) 00:10:13.518 slat (usec): min=25, max=215, avg=29.18, stdev=23.70 00:10:13.518 clat (usec): min=968, max=42079, avg=41272.49, stdev=5043.98 00:10:13.518 lat (usec): min=1037, max=42105, avg=41301.72, stdev=5038.94 00:10:13.518 clat percentiles (usec): 00:10:13.518 | 1.00th=[ 971], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:13.518 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:13.518 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:13.518 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.518 | 99.99th=[42206] 00:10:13.518 bw ( KiB/s): min= 96, max= 96, per=2.69%, avg=96.00, stdev= 0.00, samples=5 00:10:13.518 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:13.518 lat (usec) : 1000=1.49% 00:10:13.518 lat (msec) : 50=97.01% 00:10:13.518 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=1 00:10:13.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.518 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.518 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.518 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3385929: Mon Dec 9 11:25:05 2024 00:10:13.518 read: IOPS=994, BW=3978KiB/s (4073kB/s)(9.84MiB/2534msec) 00:10:13.518 slat (nsec): min=6651, max=64319, avg=27235.86, stdev=2044.23 00:10:13.518 clat (usec): min=488, max=1208, avg=963.01, stdev=53.79 00:10:13.518 lat (usec): min=515, max=1235, avg=990.25, stdev=53.93 00:10:13.518 clat percentiles (usec): 00:10:13.518 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 938], 00:10:13.518 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 971], 00:10:13.518 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1037], 00:10:13.518 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1188], 99.95th=[ 1188], 00:10:13.518 | 99.99th=[ 1205] 00:10:13.518 bw ( KiB/s): min= 4000, max= 4032, per=100.00%, avg=4017.60, stdev=14.31, samples=5 00:10:13.518 iops : min= 1000, max= 1008, avg=1004.40, stdev= 3.58, samples=5 00:10:13.518 lat (usec) : 500=0.04%, 750=0.44%, 1000=80.64% 00:10:13.518 lat (msec) : 2=18.84% 00:10:13.518 cpu : usr=3.12%, sys=2.76%, ctx=2521, majf=0, minf=2 00:10:13.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.518 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.518 issued rwts: total=2521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.518 00:10:13.518 Run status group 0 (all jobs): 00:10:13.518 READ: bw=3566KiB/s (3652kB/s), 95.4KiB/s-3978KiB/s (97.7kB/s-4073kB/s), io=10.7MiB (11.2MB), run=2534-3061msec 00:10:13.518 00:10:13.518 Disk stats (read/write): 00:10:13.518 nvme0n1: ios=67/0, merge=0/0, ticks=2748/0, in_queue=2748, util=92.62% 00:10:13.518 nvme0n2: ios=72/0, merge=0/0, ticks=2974/0, in_queue=2974, 
util=93.50% 00:10:13.518 nvme0n3: ios=61/0, merge=0/0, ticks=2518/0, in_queue=2518, util=95.59% 00:10:13.518 nvme0n4: ios=2297/0, merge=0/0, ticks=2171/0, in_queue=2171, util=95.93% 00:10:13.779 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.779 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:14.041 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.041 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:14.308 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.308 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:14.308 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.308 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3385729 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:14.569 nvmf hotplug test: fio failed as expected 00:10:14.569 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.830 rmmod nvme_tcp 00:10:14.830 rmmod nvme_fabrics 00:10:14.830 rmmod nvme_keyring 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3382119 ']' 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3382119 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3382119 ']' 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3382119 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.830 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382119 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382119' 00:10:15.092 killing process with pid 3382119 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3382119 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3382119 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.092 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.093 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.093 11:25:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.642 00:10:17.642 real 0m29.100s 00:10:17.642 user 2m39.045s 00:10:17.642 sys 0m9.243s 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.642 ************************************ 00:10:17.642 END TEST nvmf_fio_target 00:10:17.642 ************************************ 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.642 ************************************ 00:10:17.642 START TEST nvmf_bdevio 00:10:17.642 ************************************ 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:17.642 * Looking for test storage... 
00:10:17.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.642 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.642 --rc genhtml_branch_coverage=1 00:10:17.642 --rc genhtml_function_coverage=1 00:10:17.642 --rc genhtml_legend=1 00:10:17.643 --rc geninfo_all_blocks=1 00:10:17.643 --rc geninfo_unexecuted_blocks=1 00:10:17.643 00:10:17.643 ' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.643 --rc genhtml_branch_coverage=1 00:10:17.643 --rc genhtml_function_coverage=1 00:10:17.643 --rc genhtml_legend=1 00:10:17.643 --rc geninfo_all_blocks=1 00:10:17.643 --rc geninfo_unexecuted_blocks=1 00:10:17.643 00:10:17.643 ' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.643 --rc genhtml_branch_coverage=1 00:10:17.643 --rc genhtml_function_coverage=1 00:10:17.643 --rc genhtml_legend=1 00:10:17.643 --rc geninfo_all_blocks=1 00:10:17.643 --rc geninfo_unexecuted_blocks=1 00:10:17.643 00:10:17.643 ' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.643 --rc genhtml_branch_coverage=1 00:10:17.643 --rc genhtml_function_coverage=1 00:10:17.643 --rc genhtml_legend=1 00:10:17.643 --rc geninfo_all_blocks=1 00:10:17.643 --rc geninfo_unexecuted_blocks=1 00:10:17.643 00:10:17.643 ' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.643 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:25.792 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:25.793 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:25.793 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:25.793 11:25:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:25.793 Found net devices under 0000:31:00.0: cvl_0_0 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:25.793 Found net devices under 0000:31:00.1: cvl_0_1 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.793 
11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:25.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:10:25.793 00:10:25.793 --- 10.0.0.2 ping statistics --- 00:10:25.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.793 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:10:25.793 00:10:25.793 --- 10.0.0.1 ping statistics --- 00:10:25.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.793 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3391211 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3391211 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:25.793 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3391211 ']' 00:10:25.794 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.794 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.794 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.794 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.794 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.794 [2024-12-09 11:25:17.031710] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
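
The block above is nvmf_tcp_init doing the single-host wire-test trick: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path end to end before any NVMe/TCP traffic flows. A condensed sketch of that bring-up, using the interface and namespace names from this run:

ns=cvl_0_0_ns_spdk
target_if=cvl_0_0        # handed to the namespace, becomes the target side
initiator_if=cvl_0_1     # stays in the root namespace, initiator side

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
# tag the rule so teardown can strip exactly this entry later
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                      # root ns -> target ns
ip netns exec "$ns" ping -c 1 10.0.0.1  # target ns -> root ns
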
00:10:25.794 [2024-12-09 11:25:17.031778] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.794 [2024-12-09 11:25:17.133875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.794 [2024-12-09 11:25:17.184759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.794 [2024-12-09 11:25:17.184809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.794 [2024-12-09 11:25:17.184818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.794 [2024-12-09 11:25:17.184825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.794 [2024-12-09 11:25:17.184832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.794 [2024-12-09 11:25:17.186856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:25.794 [2024-12-09 11:25:17.187034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:25.794 [2024-12-09 11:25:17.187155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:25.794 [2024-12-09 11:25:17.187341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.794 [2024-12-09 11:25:17.907702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.794 Malloc0 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.794 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.055 11:25:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.055 [2024-12-09 11:25:17.981963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:26.055 { 00:10:26.055 "params": { 00:10:26.055 "name": "Nvme$subsystem", 00:10:26.055 "trtype": "$TEST_TRANSPORT", 00:10:26.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:26.055 "adrfam": "ipv4", 00:10:26.055 "trsvcid": "$NVMF_PORT", 00:10:26.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:26.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:26.055 "hdgst": ${hdgst:-false}, 00:10:26.055 "ddgst": ${ddgst:-false} 00:10:26.055 }, 00:10:26.055 "method": "bdev_nvme_attach_controller" 00:10:26.055 } 00:10:26.055 EOF 00:10:26.055 )") 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:26.055 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:26.055 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:26.055 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:26.055 "params": { 00:10:26.055 "name": "Nvme1", 00:10:26.055 "trtype": "tcp", 00:10:26.055 "traddr": "10.0.0.2", 00:10:26.055 "adrfam": "ipv4", 00:10:26.055 "trsvcid": "4420", 00:10:26.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:26.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:26.055 "hdgst": false, 00:10:26.055 "ddgst": false 00:10:26.055 }, 00:10:26.055 "method": "bdev_nvme_attach_controller" 00:10:26.055 }' 00:10:26.055 [2024-12-09 11:25:18.039138] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
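
The JSON blob printed just above never touches disk: gen_nvmf_target_json expands the NVMF_* variables through a here-document into a bdev_nvme_attach_controller entry, runs it through jq, and bdevio reads the result over /dev/fd/62. A stand-alone approximation of what bdevio receives is sketched below; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape and is assumed here, since the log only prints the inner fragment.

gen_json() {
  # values mirror the expanded config shown in the log
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# process substitution is what shows up as --json /dev/fd/62 in the log
test/bdev/bdevio/bdevio --json <(gen_json)
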
00:10:26.055 [2024-12-09 11:25:18.039206] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391380 ] 00:10:26.055 [2024-12-09 11:25:18.117818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.055 [2024-12-09 11:25:18.162484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.055 [2024-12-09 11:25:18.162604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.055 [2024-12-09 11:25:18.162607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.316 I/O targets: 00:10:26.316 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:26.316 00:10:26.316 00:10:26.316 CUnit - A unit testing framework for C - Version 2.1-3 00:10:26.316 http://cunit.sourceforge.net/ 00:10:26.316 00:10:26.316 00:10:26.316 Suite: bdevio tests on: Nvme1n1 00:10:26.316 Test: blockdev write read block ...passed 00:10:26.316 Test: blockdev write zeroes read block ...passed 00:10:26.316 Test: blockdev write zeroes read no split ...passed 00:10:26.316 Test: blockdev write zeroes read split ...passed 00:10:26.316 Test: blockdev write zeroes read split partial ...passed 00:10:26.316 Test: blockdev reset ...[2024-12-09 11:25:18.470369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:26.316 [2024-12-09 11:25:18.470435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed60e0 (9): Bad file descriptor 00:10:26.577 [2024-12-09 11:25:18.521796] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
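
For reference while reading the test output that follows: the target this suite is exercising was assembled a few entries earlier by five rpc_cmd calls (rpc_cmd is the harness wrapper around scripts/rpc.py, talking to the default /var/tmp/spdk.sock, which stays reachable from the root namespace because network namespaces do not isolate the filesystem). Replayed as plain commands:

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192   # options mirror NVMF_TRANSPORT_OPTS above
$rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
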
00:10:26.577 passed 00:10:26.577 Test: blockdev write read 8 blocks ...passed 00:10:26.577 Test: blockdev write read size > 128k ...passed 00:10:26.577 Test: blockdev write read invalid size ...passed 00:10:26.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.577 Test: blockdev write read max offset ...passed 00:10:26.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.577 Test: blockdev writev readv 8 blocks ...passed 00:10:26.577 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.838 Test: blockdev writev readv block ...passed 00:10:26.838 Test: blockdev writev readv size > 128k ...passed 00:10:26.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.838 Test: blockdev comparev and writev ...[2024-12-09 11:25:18.789977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.790001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.790016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.790022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.790493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.790502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.790512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.790518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.790976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.790985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.790995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.791001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.791459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.791468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.791478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.838 [2024-12-09 11:25:18.791483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:26.838 passed 00:10:26.838 Test: blockdev nvme passthru rw ...passed 00:10:26.838 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:25:18.876902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.838 [2024-12-09 11:25:18.876913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:26.838 [2024-12-09 11:25:18.877268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.839 [2024-12-09 11:25:18.877277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:26.839 [2024-12-09 11:25:18.877648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.839 [2024-12-09 11:25:18.877656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:26.839 [2024-12-09 11:25:18.877966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.839 [2024-12-09 11:25:18.877974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:26.839 passed 00:10:26.839 Test: blockdev nvme admin passthru ...passed 00:10:26.839 Test: blockdev copy ...passed 00:10:26.839 00:10:26.839 Run Summary: Type Total Ran Passed Failed Inactive 00:10:26.839 suites 1 1 n/a 0 0 00:10:26.839 tests 23 23 23 0 0 00:10:26.839 asserts 152 152 152 0 n/a 00:10:26.839 00:10:26.839 Elapsed time = 1.170 seconds 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.100 rmmod nvme_tcp 00:10:27.100 rmmod nvme_fabrics 00:10:27.100 rmmod nvme_keyring 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
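
Teardown now unwinds the setup in reverse. The module unload is deliberately forgiving: set +e around up to 20 modprobe -r attempts, since unload can race with in-flight disconnects, and the firewall restore keys off the SPDK_NVMF comment the setup attached to its rule. Condensed into a sketch (the retry pacing is an assumption; the log loops without a visible delay):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: SIGTERM lets the app's own cleanup run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged rule
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
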
00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3391211 ']' 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3391211 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3391211 ']' 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3391211 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391211 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391211' 00:10:27.100 killing process with pid 3391211 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3391211 00:10:27.100 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3391211 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.361 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.908 00:10:29.908 real 0m12.157s 00:10:29.908 user 0m12.918s 00:10:29.908 sys 0m6.161s 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.908 ************************************ 00:10:29.908 END TEST nvmf_bdevio 00:10:29.908 ************************************ 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:29.908 00:10:29.908 real 5m2.961s 00:10:29.908 user 11m52.034s 00:10:29.908 sys 1m49.274s 
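
Those timing blocks come from run_test in autotest_common.sh: each suite runs under the shell's time builtin between START/END banners, which is also where the '[' 3 -le 1 ']' argument guard visible below originates. A sketch of the visible shape, minus the xtrace and exit-code bookkeeping the real helper does:

run_test() {
    local name=$1; shift
    (( $# >= 1 )) || return 1   # mirrors the '[' N -le 1 ']' guard in the log
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}

run_test nvmf_target_extra test/nvmf/nvmf_target_extra.sh --transport=tcp
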
00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.908 ************************************ 00:10:29.908 END TEST nvmf_target_core 00:10:29.908 ************************************ 00:10:29.908 11:25:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:29.908 11:25:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.908 11:25:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.908 11:25:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.908 ************************************ 00:10:29.908 START TEST nvmf_target_extra 00:10:29.908 ************************************ 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:29.908 * Looking for test storage... 00:10:29.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.908 --rc genhtml_branch_coverage=1 00:10:29.908 --rc genhtml_function_coverage=1 00:10:29.908 --rc genhtml_legend=1 00:10:29.908 --rc geninfo_all_blocks=1 00:10:29.908 --rc geninfo_unexecuted_blocks=1 00:10:29.908 00:10:29.908 ' 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.908 --rc genhtml_branch_coverage=1 00:10:29.908 --rc genhtml_function_coverage=1 00:10:29.908 --rc genhtml_legend=1 00:10:29.908 --rc geninfo_all_blocks=1 00:10:29.908 --rc geninfo_unexecuted_blocks=1 00:10:29.908 00:10:29.908 ' 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.908 --rc genhtml_branch_coverage=1 00:10:29.908 --rc genhtml_function_coverage=1 00:10:29.908 --rc genhtml_legend=1 00:10:29.908 --rc geninfo_all_blocks=1 00:10:29.908 --rc geninfo_unexecuted_blocks=1 00:10:29.908 00:10:29.908 ' 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.908 --rc genhtml_branch_coverage=1 00:10:29.908 --rc genhtml_function_coverage=1 00:10:29.908 --rc genhtml_legend=1 00:10:29.908 --rc geninfo_all_blocks=1 00:10:29.908 --rc geninfo_unexecuted_blocks=1 00:10:29.908 00:10:29.908 ' 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
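
The lt 1.15 2 probe a few entries up is scripts/common.sh comparing the installed lcov version against 2.0 to pick the matching --rc option spelling for coverage runs: the version strings are split on the characters . - : and the fields are compared numerically, left to right. A self-contained re-derivation of that comparison (the real helper routes through cmp_versions and a decimal() sanitizer, both traced above):

# true when dotted version $1 sorts before $2; fields are assumed numeric
lt() {
    local -a ver1 ver2
    local v max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # missing fields count as 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo 'pre-2.0 lcov: use the legacy --rc lcov_* option names'
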
00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.908 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:29.909 ************************************ 00:10:29.909 START TEST nvmf_example 00:10:29.909 ************************************ 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:29.909 * Looking for test storage... 
00:10:29.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.909 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.909 --rc genhtml_branch_coverage=1 00:10:29.909 --rc genhtml_function_coverage=1 00:10:29.909 --rc genhtml_legend=1 00:10:29.909 --rc geninfo_all_blocks=1 00:10:29.909 --rc geninfo_unexecuted_blocks=1 00:10:29.909 00:10:29.909 ' 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.909 --rc genhtml_branch_coverage=1 00:10:29.909 --rc genhtml_function_coverage=1 00:10:29.909 --rc genhtml_legend=1 00:10:29.909 --rc geninfo_all_blocks=1 00:10:29.909 --rc geninfo_unexecuted_blocks=1 00:10:29.909 00:10:29.909 ' 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.909 --rc genhtml_branch_coverage=1 00:10:29.909 --rc genhtml_function_coverage=1 00:10:29.909 --rc genhtml_legend=1 00:10:29.909 --rc geninfo_all_blocks=1 00:10:29.909 --rc geninfo_unexecuted_blocks=1 00:10:29.909 00:10:29.909 ' 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.909 --rc genhtml_branch_coverage=1 00:10:29.909 --rc genhtml_function_coverage=1 00:10:29.909 --rc genhtml_legend=1 00:10:29.909 --rc geninfo_all_blocks=1 00:10:29.909 --rc geninfo_unexecuted_blocks=1 00:10:29.909 00:10:29.909 ' 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:29.909 11:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.909 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:29.910 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:30.171 11:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.171 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:38.311 11:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.311 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:38.312 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:38.312 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:38.312 Found net devices under 0000:31:00.0: cvl_0_0 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:38.312 Found net devices under 0000:31:00.1: cvl_0_1 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.312 11:25:29 
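Each "Found net devices under ..." entry above comes from globbing the PCI function's net/ directory in sysfs and stripping the path prefix. A sketch of that lookup, with the PCI address taken from this log:

  pci=0000:31:00.0                          # first E810 port from the trace
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0

The [[ up == up ]] tests in the same loop are the script confirming each candidate interface reports "up" before it is added to net_devs.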
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
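The ipts helper above appends an "-m comment --comment SPDK_NVMF:..." tag to every firewall rule it installs, which turns teardown into a filter instead of rule-by-rule bookkeeping. Both halves of the pattern appear verbatim in this log (the restore side runs later, in the iptr call inside nvmf_tcp_fini):

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown: drop every tagged rule in one pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore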
00:10:38.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:10:38.312 00:10:38.312 --- 10.0.0.2 ping statistics --- 00:10:38.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.312 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:10:38.312 00:10:38.312 --- 10.0.0.1 ping statistics --- 00:10:38.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.312 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:38.312 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3396166 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3396166 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3396166 ']' 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.313 11:25:29 
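Condensed from the nvmf_tcp_init trace, this is the two-port loopback topology those pings just validated: one physical port is pushed into a private network namespace and plays the target (10.0.0.2), its sibling stays in the root namespace as the initiator (10.0.0.1), and traffic crosses the physical link between the two ports. Interface names and addresses are the ones from this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                 # root ns -> target, 0.630 ms above

Because the target application is then launched under "ip netns exec cvl_0_0_ns_spdk", it can only see the port inside the namespace, keeping the NVMe/TCP traffic on the physical NICs rather than the loopback device.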
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.313 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.313 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.313 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:38.313 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:38.313 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:38.313 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:38.574 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:50.811 Initializing NVMe Controllers 00:10:50.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:50.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:50.811 Initialization complete. Launching workers. 00:10:50.811 ======================================================== 00:10:50.811 Latency(us) 00:10:50.811 Device Information : IOPS MiB/s Average min max 00:10:50.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18075.13 70.61 3541.18 674.32 16369.43 00:10:50.811 ======================================================== 00:10:50.811 Total : 18075.13 70.61 3541.18 674.32 16369.43 00:10:50.811 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.811 rmmod nvme_tcp 00:10:50.811 rmmod nvme_fabrics 00:10:50.811 rmmod nvme_keyring 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3396166 ']' 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3396166 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3396166 ']' 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3396166 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396166 00:10:50.811 11:25:40 
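The rpc_cmd calls traced above (transport, malloc bdev, subsystem, namespace, listener) are the minimal NVMe-oF/TCP target bring-up. Replayed by hand against a running target with scripts/rpc.py, assuming the default /var/tmp/spdk.sock and an SPDK checkout as working directory; the RPC names and arguments are exactly those in the trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # then drive it from the initiator side, as the perf run above does:
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Per the latency table above, the 10-second mixed random 4 KiB run lands at roughly 18 k IOPS with about 3.5 ms average latency.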
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3396166' 00:10:50.811 killing process with pid 3396166 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3396166 00:10:50.811 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3396166 00:10:50.811 nvmf threads initialize successfully 00:10:50.811 bdev subsystem init successfully 00:10:50.811 created a nvmf target service 00:10:50.811 create targets's poll groups done 00:10:50.811 all subsystems of target started 00:10:50.811 nvmf target is running 00:10:50.811 all subsystems of target stopped 00:10:50.811 destroy targets's poll groups done 00:10:50.811 destroyed the nvmf target service 00:10:50.811 bdev subsystem finish successfully 00:10:50.811 nvmf threads destroy successfully 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.811 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.073 00:10:51.073 real 0m21.334s 00:10:51.073 user 0m46.534s 00:10:51.073 sys 0m6.859s 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.073 ************************************ 00:10:51.073 END TEST nvmf_example 00:10:51.073 ************************************ 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.073 11:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:51.337 ************************************ 00:10:51.337 START TEST nvmf_filesystem 00:10:51.337 ************************************ 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:51.337 * Looking for test storage... 00:10:51.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.337 --rc genhtml_branch_coverage=1 00:10:51.337 --rc genhtml_function_coverage=1 00:10:51.337 --rc genhtml_legend=1 00:10:51.337 --rc geninfo_all_blocks=1 00:10:51.337 --rc geninfo_unexecuted_blocks=1 00:10:51.337 00:10:51.337 ' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.337 --rc genhtml_branch_coverage=1 00:10:51.337 --rc genhtml_function_coverage=1 00:10:51.337 --rc genhtml_legend=1 00:10:51.337 --rc geninfo_all_blocks=1 00:10:51.337 --rc geninfo_unexecuted_blocks=1 00:10:51.337 00:10:51.337 ' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.337 --rc genhtml_branch_coverage=1 00:10:51.337 --rc genhtml_function_coverage=1 00:10:51.337 --rc genhtml_legend=1 00:10:51.337 --rc geninfo_all_blocks=1 00:10:51.337 --rc geninfo_unexecuted_blocks=1 00:10:51.337 00:10:51.337 ' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.337 --rc genhtml_branch_coverage=1 00:10:51.337 --rc genhtml_function_coverage=1 00:10:51.337 --rc genhtml_legend=1 00:10:51.337 --rc geninfo_all_blocks=1 00:10:51.337 --rc geninfo_unexecuted_blocks=1 00:10:51.337 00:10:51.337 ' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:51.337 11:25:43 
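The cmp_versions walk above ("lt 1.15 2") is deciding which lcov flag spelling to export. The idiom is an ordinary field-by-field numeric compare; a compact sketch of the same logic, not the scripts/common.sh implementation itself:

  cmp_lt() {   # cmp_lt A B -> success when version A < version B
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  cmp_lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_* option spelling'

Here lcov reported 1.15, so the test exports the pre-2.0 "--rc lcov_branch_coverage=1 ..." spelling seen in the LCOV_OPTS lines above.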
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:51.337 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:51.338 
11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:51.338 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:51.338 #define SPDK_CONFIG_H 00:10:51.338 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:51.338 #define SPDK_CONFIG_APPS 1 00:10:51.338 #define SPDK_CONFIG_ARCH native 00:10:51.338 #undef SPDK_CONFIG_ASAN 00:10:51.338 #undef SPDK_CONFIG_AVAHI 00:10:51.338 #undef SPDK_CONFIG_CET 00:10:51.338 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:51.338 #define SPDK_CONFIG_COVERAGE 1 00:10:51.338 #define SPDK_CONFIG_CROSS_PREFIX 00:10:51.338 #undef SPDK_CONFIG_CRYPTO 00:10:51.339 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:51.339 #undef SPDK_CONFIG_CUSTOMOCF 00:10:51.339 #undef SPDK_CONFIG_DAOS 00:10:51.339 #define SPDK_CONFIG_DAOS_DIR 00:10:51.339 #define SPDK_CONFIG_DEBUG 1 00:10:51.339 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:51.339 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:51.339 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:51.339 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:51.339 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:51.339 #undef SPDK_CONFIG_DPDK_UADK 00:10:51.339 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:51.339 #define SPDK_CONFIG_EXAMPLES 1 00:10:51.339 #undef SPDK_CONFIG_FC 00:10:51.339 #define SPDK_CONFIG_FC_PATH 00:10:51.339 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:51.339 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:51.339 #define SPDK_CONFIG_FSDEV 1 00:10:51.339 #undef SPDK_CONFIG_FUSE 00:10:51.339 #undef SPDK_CONFIG_FUZZER 00:10:51.339 #define SPDK_CONFIG_FUZZER_LIB 00:10:51.339 #undef SPDK_CONFIG_GOLANG 00:10:51.339 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:51.339 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:51.339 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:51.339 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:51.339 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:51.339 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:51.339 #undef SPDK_CONFIG_HAVE_LZ4 00:10:51.339 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:51.339 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:51.339 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:51.339 #define SPDK_CONFIG_IDXD 1 00:10:51.339 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:51.339 #undef SPDK_CONFIG_IPSEC_MB 00:10:51.339 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:51.339 #define SPDK_CONFIG_ISAL 1 00:10:51.339 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:51.339 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:51.339 #define SPDK_CONFIG_LIBDIR 00:10:51.339 #undef SPDK_CONFIG_LTO 00:10:51.339 #define SPDK_CONFIG_MAX_LCORES 128 00:10:51.339 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:51.339 #define SPDK_CONFIG_NVME_CUSE 1 00:10:51.339 #undef SPDK_CONFIG_OCF 00:10:51.339 #define SPDK_CONFIG_OCF_PATH 00:10:51.339 #define SPDK_CONFIG_OPENSSL_PATH 00:10:51.339 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:51.339 #define SPDK_CONFIG_PGO_DIR 00:10:51.339 #undef SPDK_CONFIG_PGO_USE 00:10:51.339 #define SPDK_CONFIG_PREFIX /usr/local 00:10:51.339 #undef SPDK_CONFIG_RAID5F 00:10:51.339 #undef SPDK_CONFIG_RBD 00:10:51.339 #define SPDK_CONFIG_RDMA 1 00:10:51.339 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:51.339 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:51.339 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:51.339 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:51.339 #define SPDK_CONFIG_SHARED 1 00:10:51.339 #undef SPDK_CONFIG_SMA 00:10:51.339 #define SPDK_CONFIG_TESTS 1 00:10:51.339 #undef SPDK_CONFIG_TSAN 
00:10:51.339 #define SPDK_CONFIG_UBLK 1 00:10:51.339 #define SPDK_CONFIG_UBSAN 1 00:10:51.339 #undef SPDK_CONFIG_UNIT_TESTS 00:10:51.339 #undef SPDK_CONFIG_URING 00:10:51.339 #define SPDK_CONFIG_URING_PATH 00:10:51.339 #undef SPDK_CONFIG_URING_ZNS 00:10:51.339 #undef SPDK_CONFIG_USDT 00:10:51.339 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:51.339 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:51.339 #define SPDK_CONFIG_VFIO_USER 1 00:10:51.339 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:51.339 #define SPDK_CONFIG_VHOST 1 00:10:51.339 #define SPDK_CONFIG_VIRTIO 1 00:10:51.339 #undef SPDK_CONFIG_VTUNE 00:10:51.339 #define SPDK_CONFIG_VTUNE_DIR 00:10:51.339 #define SPDK_CONFIG_WERROR 1 00:10:51.339 #define SPDK_CONFIG_WPDK_DIR 00:10:51.339 #undef SPDK_CONFIG_XNVME 00:10:51.339 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:51.339 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:51.605 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
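The pm/common prologue above pairs an associative "does this collector need root" map with a two-entry SUDO array, so launching a monitor reduces to one indexed expansion. A sketch of that dispatch under the same names and values as the trace; the launch function itself is hypothetical (the real collectors live under spdk/scripts/perf/pm/):

  declare -A MONITOR_RESOURCES_SUDO=(
    ["collect-bmc-pm"]=1
    ["collect-cpu-load"]=0
    ["collect-cpu-temp"]=0
    ["collect-vmstat"]=0
  )
  SUDO=("" "sudo -E")
  start_monitor() {   # hypothetical helper, for illustration only
    local m=$1
    ${SUDO[${MONITOR_RESOURCES_SUDO[$m]:-0}]} "./$m" &
  }
  start_monitor collect-cpu-load   # runs unprivileged
  start_monitor collect-bmc-pm     # runs under sudo -E

Keeping the privilege decision in data rather than in per-collector branches is what lets the later MONITOR_RESOURCES+=(...) lines add collectors without touching the launch path.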
00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:51.605 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:51.605 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
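Note that LD_LIBRARY_PATH and PYTHONPATH above carry the same directories several times over: every nested source of the test environment prepends the full set again. Lookup still works (the first hit wins), it is just noisy. A hedged cleanup helper, not part of SPDK, that collapses such a colon-separated variable:

    dedup_path() {
        # Keep the first occurrence of each colon-separated entry, drop the rest.
        local IFS=: seen= out= dir
        for dir in $1; do
            case ":$seen:" in
                *":$dir:"*) ;;                                 # already kept
                *) seen="$seen:$dir"; out="${out:+$out:}$dir" ;;
            esac
        done
        printf '%s\n' "$out"
    }
    LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")
    export LD_LIBRARY_PATH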
00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
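The sanitizer plumbing in this stretch of the trace follows one recipe: rebuild the LSan suppression file from scratch, point LSAN_OPTIONS at it, and make ASan/UBSan fail loudly so CI cannot glide past a finding. Condensed, with the option strings copied verbatim from the trace:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                      # never reuse a stale file between jobs
    echo "leak:libfuse3.so" >> "$asan_suppression_file"  # known-noisy leak, suppressed
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    # Abort on the first finding instead of limping on; exitcode 134 mimics SIGABRT.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134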
00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:51.606 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3398952 ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3398952 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
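set_test_storage 2147483648, entered at the end of the line above, is what produces the df walk that follows: request ~2 GiB of scratch plus a 64 MiB margin (which is why the trace shows requested_size=2214592512), build candidate directories (the test dir itself, then a mktemp-style fallback under /tmp), and take the first whose filesystem has room. A stripped-down reconstruction under a hypothetical name; the real helper parses one "df -T" pass rather than calling df per candidate:

    pick_test_storage() {   # hypothetical name; stands in for set_test_storage
        local requested_size=$(( $1 + 64 * 1024 * 1024 ))  # 2147483648 in -> 2214592512 requested
        local fallback candidate avail
        fallback=$(mktemp -udt spdk.XXXXXX)                # -u: generate a name only, e.g. /tmp/spdk.xsTG7f
        for candidate in "$2" "$fallback/tests/${2##*/}" "$fallback"; do
            mkdir -p "$candidate"
            avail=$(df --output=avail -B1 "$candidate" | tail -1)
            if (( avail >= requested_size )); then
                printf '* Found test storage at %s\n' "$candidate" >&2
                echo "$candidate"
                return 0
            fi
        done
        return 1
    }
    SPDK_TEST_STORAGE=$(pick_test_storage 2147483648 "$PWD/target")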
00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.xsTG7f 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xsTG7f/tests/target /tmp/spdk.xsTG7f 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:51.607 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122625429504 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356558336 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6731128832 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666910720 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847898112 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871314944 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23416832 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:51.607 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677826560 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678281216 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=454656 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935643136 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935655424 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:51.607 * Looking for test storage... 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122625429504 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8945721344 00:10:51.607 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.608 --rc genhtml_branch_coverage=1 00:10:51.608 --rc genhtml_function_coverage=1 00:10:51.608 --rc genhtml_legend=1 00:10:51.608 --rc geninfo_all_blocks=1 00:10:51.608 --rc geninfo_unexecuted_blocks=1 00:10:51.608 00:10:51.608 ' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.608 --rc genhtml_branch_coverage=1 00:10:51.608 --rc genhtml_function_coverage=1 00:10:51.608 --rc genhtml_legend=1 00:10:51.608 --rc geninfo_all_blocks=1 00:10:51.608 --rc geninfo_unexecuted_blocks=1 00:10:51.608 00:10:51.608 ' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.608 --rc genhtml_branch_coverage=1 00:10:51.608 --rc genhtml_function_coverage=1 00:10:51.608 --rc genhtml_legend=1 00:10:51.608 --rc geninfo_all_blocks=1 00:10:51.608 --rc geninfo_unexecuted_blocks=1 00:10:51.608 00:10:51.608 ' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.608 --rc genhtml_branch_coverage=1 00:10:51.608 --rc genhtml_function_coverage=1 00:10:51.608 --rc genhtml_legend=1 00:10:51.608 --rc geninfo_all_blocks=1 00:10:51.608 --rc geninfo_unexecuted_blocks=1 00:10:51.608 00:10:51.608 ' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.608 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.609 11:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.609 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.756 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.756 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.756 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:59.757 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:59.757 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.757 11:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:59.757 Found net devices under 0000:31:00.0: cvl_0_0 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:59.757 Found net devices under 0000:31:00.1: cvl_0_1 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.757 11:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.757 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:10:59.757 00:10:59.757 --- 10.0.0.2 ping statistics --- 00:10:59.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.757 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:59.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:10:59.757 00:10:59.757 --- 10.0.0.1 ping statistics --- 00:10:59.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.757 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:59.757 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.758 ************************************ 00:10:59.758 START TEST nvmf_filesystem_no_in_capsule 00:10:59.758 ************************************ 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3402679 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3402679 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3402679 ']' 00:10:59.758 
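For reference, the nvmf_tcp_init sequence that completed just before the test banner above reduces to the following commands, taken directly from the trace: the physical port cvl_0_0 is moved into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the real link rather than loopback, the NVMe/TCP port 4420 is opened in iptables, and reachability is verified in both directions with ping.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator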
11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.758 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.758 [2024-12-09 11:25:51.298467] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:10:59.758 [2024-12-09 11:25:51.298520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.758 [2024-12-09 11:25:51.382001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.758 [2024-12-09 11:25:51.421375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.758 [2024-12-09 11:25:51.421411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.758 [2024-12-09 11:25:51.421419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.758 [2024-12-09 11:25:51.421426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.758 [2024-12-09 11:25:51.421432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
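The EAL parameter dump above corresponds to the nvmf_tgt launch traced at @508: the target runs inside the namespace created earlier, and its flags explain the notices that follow. A sketch of the launch line with the flag meanings annotated (per SPDK's app framework):

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # -i 0      shared-memory instance id (the file-prefix=spdk0 in the EAL args above)
    # -e 0xFFFF tracepoint group mask -> the 'Tracepoint Group Mask 0xFFFF' notice
    # -m 0xF    core mask 0b1111 -> 'Total cores available: 4' and the four reactors below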
00:10:59.758 [2024-12-09 11:25:51.423300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.758 [2024-12-09 11:25:51.423414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.758 [2024-12-09 11:25:51.423570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.758 [2024-12-09 11:25:51.423571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.019 [2024-12-09 11:25:52.147346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.019 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.280 Malloc1 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.280 11:25:52 
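The rpc_cmd calls traced here, together with the add_ns/add_listener calls that continue just below, are the entire target-side bring-up. Issued by hand they map onto SPDK's scripts/rpc.py (which rpc_cmd wraps, against the default /var/tmp/spdk.sock socket) roughly as:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # in-capsule data size 0 for this variant
    rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ramdisk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -s SPDKISFASTANDAWESOME serial number is what the initiator later greps for in lsblk to find its block device.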
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.280 [2024-12-09 11:25:52.284028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:00.280 { 00:11:00.280 "name": "Malloc1", 00:11:00.280 "aliases": [ 00:11:00.280 "e295fc47-2063-4b37-9fbb-0155292144ec" 00:11:00.280 ], 00:11:00.280 "product_name": "Malloc disk", 00:11:00.280 "block_size": 512, 00:11:00.280 "num_blocks": 1048576, 00:11:00.280 "uuid": "e295fc47-2063-4b37-9fbb-0155292144ec", 00:11:00.280 "assigned_rate_limits": { 00:11:00.280 "rw_ios_per_sec": 0, 00:11:00.280 "rw_mbytes_per_sec": 0, 00:11:00.280 "r_mbytes_per_sec": 0, 00:11:00.280 "w_mbytes_per_sec": 0 00:11:00.280 }, 00:11:00.280 "claimed": true, 00:11:00.280 "claim_type": "exclusive_write", 00:11:00.280 "zoned": false, 00:11:00.280 "supported_io_types": { 00:11:00.280 "read": 
true, 00:11:00.280 "write": true, 00:11:00.280 "unmap": true, 00:11:00.280 "flush": true, 00:11:00.280 "reset": true, 00:11:00.280 "nvme_admin": false, 00:11:00.280 "nvme_io": false, 00:11:00.280 "nvme_io_md": false, 00:11:00.280 "write_zeroes": true, 00:11:00.280 "zcopy": true, 00:11:00.280 "get_zone_info": false, 00:11:00.280 "zone_management": false, 00:11:00.280 "zone_append": false, 00:11:00.280 "compare": false, 00:11:00.280 "compare_and_write": false, 00:11:00.280 "abort": true, 00:11:00.280 "seek_hole": false, 00:11:00.280 "seek_data": false, 00:11:00.280 "copy": true, 00:11:00.280 "nvme_iov_md": false 00:11:00.280 }, 00:11:00.280 "memory_domains": [ 00:11:00.280 { 00:11:00.280 "dma_device_id": "system", 00:11:00.280 "dma_device_type": 1 00:11:00.280 }, 00:11:00.280 { 00:11:00.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.280 "dma_device_type": 2 00:11:00.280 } 00:11:00.280 ], 00:11:00.280 "driver_specific": {} 00:11:00.280 } 00:11:00.280 ]' 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:00.280 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.194 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.194 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.194 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.194 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:02.194 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:04.105 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:04.367 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:04.938 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.324 ************************************ 00:11:06.324 START TEST filesystem_ext4 00:11:06.324 ************************************ 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
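Host-side, the steps just traced are: connect with nvme-cli, locate the block device by the serial number set on the subsystem, confirm its size matches the malloc bdev (1048576 blocks x 512 B = 536870912 bytes), then lay down a GPT with a single partition spanning the disk. Condensed from the trace:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'  # -> nvme0n1
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe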
00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:06.324 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:06.324 mke2fs 1.47.0 (5-Feb-2023) 00:11:06.324 Discarding device blocks: 0/522240 done 00:11:06.324 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:06.324 Filesystem UUID: bc4429c3-5788-4b9d-b613-aa179995786c 00:11:06.324 Superblock backups stored on blocks: 00:11:06.324 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:06.324 00:11:06.324 Allocating group tables: 0/64 done 00:11:06.324 Writing inode tables: 0/64 done 00:11:08.873 Creating journal (8192 blocks): done 00:11:08.873 Writing superblocks and filesystem accounting information: 0/64 done 00:11:08.873 00:11:08.873 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:08.873 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.457 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.457 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.458 
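Each filesystem variant runs the same nvmf_filesystem_create check: mount the freshly made filesystem, create and delete a file with syncs in between, unmount, then (in the lines that follow) verify the target process is still alive and the namespace and partition are still visible. In outline, from the ext4 trace here:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # target process still running?
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still exported?
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still intact?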
11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3402679 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.458 00:11:15.458 real 0m8.505s 00:11:15.458 user 0m0.031s 00:11:15.458 sys 0m0.075s 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:15.458 ************************************ 00:11:15.458 END TEST filesystem_ext4 00:11:15.458 ************************************ 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.458 ************************************ 00:11:15.458 START TEST filesystem_btrfs 00:11:15.458 ************************************ 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:15.458 11:26:06 
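make_filesystem, whose trace is starting here for btrfs (and ran above for ext4), is essentially a force-flag choice in front of mkfs: ext4's mkfs wants -F, the others -f. A sketch reconstructed from these traces, with the helper's retry counter (the i=0 local) and retry loop omitted:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"    # e.g. mkfs.btrfs -f /dev/nvme0n1p1
    }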
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:15.458 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:15.458 btrfs-progs v6.8.1 00:11:15.458 See https://btrfs.readthedocs.io for more information. 00:11:15.458 00:11:15.458 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:15.458 NOTE: several default settings have changed in version 5.15, please make sure 00:11:15.458 this does not affect your deployments: 00:11:15.458 - DUP for metadata (-m dup) 00:11:15.458 - enabled no-holes (-O no-holes) 00:11:15.458 - enabled free-space-tree (-R free-space-tree) 00:11:15.458 00:11:15.458 Label: (null) 00:11:15.458 UUID: 48904dbd-6d17-4e72-8ea3-345e2f5ac35f 00:11:15.458 Node size: 16384 00:11:15.458 Sector size: 4096 (CPU page size: 4096) 00:11:15.458 Filesystem size: 510.00MiB 00:11:15.458 Block group profiles: 00:11:15.458 Data: single 8.00MiB 00:11:15.458 Metadata: DUP 32.00MiB 00:11:15.458 System: DUP 8.00MiB 00:11:15.458 SSD detected: yes 00:11:15.458 Zoned device: no 00:11:15.458 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:15.458 Checksum: crc32c 00:11:15.458 Number of devices: 1 00:11:15.458 Devices: 00:11:15.458 ID SIZE PATH 00:11:15.458 1 510.00MiB /dev/nvme0n1p1 00:11:15.458 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3402679 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.458 
11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.458 00:11:15.458 real 0m0.805s 00:11:15.458 user 0m0.035s 00:11:15.458 sys 0m0.113s 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.458 ************************************ 00:11:15.458 END TEST filesystem_btrfs 00:11:15.458 ************************************ 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.458 ************************************ 00:11:15.458 START TEST filesystem_xfs 00:11:15.458 ************************************ 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:15.458 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:15.719 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:15.719 = sectsz=512 attr=2, projid32bit=1 00:11:15.719 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:15.720 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:15.720 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:15.720 = sunit=0 swidth=0 blks 00:11:15.720 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:15.720 log =internal log bsize=4096 blocks=16384, version=2 00:11:15.720 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:15.720 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:16.291 Discarding blocks...Done. 00:11:16.291 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:16.291 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3402679 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.206 00:11:18.206 real 0m2.648s 00:11:18.206 user 0m0.025s 00:11:18.206 sys 0m0.083s 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:18.206 ************************************ 00:11:18.206 END TEST filesystem_xfs 00:11:18.206 ************************************ 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:18.206 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.467 11:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.467 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3402679 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3402679 ']' 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3402679 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3402679 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3402679' 00:11:18.468 killing process with pid 3402679 00:11:18.468 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3402679 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3402679 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:18.729 00:11:18.729 real 0m19.610s 00:11:18.729 user 1m17.589s 00:11:18.729 sys 0m1.418s 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.729 ************************************ 00:11:18.729 END TEST nvmf_filesystem_no_in_capsule 00:11:18.729 ************************************ 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.729 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 ************************************ 00:11:18.990 START TEST nvmf_filesystem_in_capsule 00:11:18.990 ************************************ 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3406907 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3406907 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3406907 ']' 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
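Teardown, traced just above, mirrors the bring-up: nvme disconnect on the initiator, nvmf_delete_subsystem over RPC, then kill and wait on the target pid. The in-capsule run starting here repeats the entire sequence; the functional difference the test name refers to shows up in the transport creation below:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first variant: no in-capsule data
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this variant: up to 4 KiB inline in the command capsule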
00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.990 [2024-12-09 11:26:10.982657] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:11:18.990 [2024-12-09 11:26:10.982708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.990 [2024-12-09 11:26:11.065070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.990 [2024-12-09 11:26:11.103210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.990 [2024-12-09 11:26:11.103257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.990 [2024-12-09 11:26:11.103265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.990 [2024-12-09 11:26:11.103272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.990 [2024-12-09 11:26:11.103278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.990 [2024-12-09 11:26:11.104838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.990 [2024-12-09 11:26:11.104971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.990 [2024-12-09 11:26:11.105128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.990 [2024-12-09 11:26:11.105219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 11:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 [2024-12-09 11:26:11.828448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 Malloc1 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 [2024-12-09 11:26:11.970022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1384 -- # local bs 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.933 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:19.933 { 00:11:19.933 "name": "Malloc1", 00:11:19.933 "aliases": [ 00:11:19.933 "c436f05e-4801-4d3c-9050-9106bb95bb2b" 00:11:19.933 ], 00:11:19.933 "product_name": "Malloc disk", 00:11:19.933 "block_size": 512, 00:11:19.933 "num_blocks": 1048576, 00:11:19.933 "uuid": "c436f05e-4801-4d3c-9050-9106bb95bb2b", 00:11:19.933 "assigned_rate_limits": { 00:11:19.933 "rw_ios_per_sec": 0, 00:11:19.933 "rw_mbytes_per_sec": 0, 00:11:19.933 "r_mbytes_per_sec": 0, 00:11:19.933 "w_mbytes_per_sec": 0 00:11:19.933 }, 00:11:19.933 "claimed": true, 00:11:19.933 "claim_type": "exclusive_write", 00:11:19.933 "zoned": false, 00:11:19.933 "supported_io_types": { 00:11:19.933 "read": true, 00:11:19.933 "write": true, 00:11:19.933 "unmap": true, 00:11:19.933 "flush": true, 00:11:19.933 "reset": true, 00:11:19.933 "nvme_admin": false, 00:11:19.933 "nvme_io": false, 00:11:19.933 "nvme_io_md": false, 00:11:19.933 "write_zeroes": true, 00:11:19.933 "zcopy": true, 00:11:19.933 "get_zone_info": false, 00:11:19.933 "zone_management": false, 00:11:19.933 "zone_append": false, 00:11:19.933 "compare": false, 00:11:19.933 "compare_and_write": false, 00:11:19.933 "abort": true, 00:11:19.933 "seek_hole": false, 00:11:19.933 "seek_data": false, 00:11:19.933 "copy": true, 00:11:19.933 "nvme_iov_md": false 00:11:19.933 }, 00:11:19.933 "memory_domains": [ 00:11:19.933 { 00:11:19.933 "dma_device_id": "system", 00:11:19.933 "dma_device_type": 1 00:11:19.933 }, 00:11:19.933 { 00:11:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.933 "dma_device_type": 2 00:11:19.933 } 00:11:19.933 ], 00:11:19.933 "driver_specific": {} 00:11:19.933 } 00:11:19.933 ]' 00:11:19.933 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:19.933 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:19.933 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:20.195 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:20.195 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:20.195 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:20.195 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 
-- # malloc_size=536870912 00:11:20.195 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.580 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.580 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.580 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.580 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:21.580 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:23.496 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:23.496 11:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:23.757 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:24.330 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:25.273 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:25.273 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:25.273 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.273 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.274 ************************************ 00:11:25.274 START TEST filesystem_in_capsule_ext4 00:11:25.274 ************************************ 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:25.274 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:25.274 mke2fs 1.47.0 (5-Feb-2023) 00:11:25.274 Discarding device blocks: 0/522240 done 00:11:25.274 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:25.274 Filesystem UUID: 9df8afdb-c10b-44f6-b556-008eb7aa216d 00:11:25.274 Superblock backups 
stored on blocks: 00:11:25.274 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:25.274 00:11:25.274 Allocating group tables: 0/64 done 00:11:25.274 Writing inode tables: 0/64 done 00:11:27.820 Creating journal (8192 blocks): done 00:11:28.081 Writing superblocks and filesystem accounting information: 0/64 done 00:11:28.081 00:11:28.081 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:28.081 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3406907 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.374 00:11:33.374 real 0m8.152s 00:11:33.374 user 0m0.030s 00:11:33.374 sys 0m0.082s 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.374 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:33.374 ************************************ 00:11:33.374 END TEST filesystem_in_capsule_ext4 00:11:33.374 ************************************ 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.635 ************************************ 00:11:33.635 START TEST filesystem_in_capsule_btrfs 00:11:33.635 ************************************ 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:33.635 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:33.896 btrfs-progs v6.8.1 00:11:33.896 See https://btrfs.readthedocs.io for more information. 00:11:33.896 00:11:33.896 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:33.896 NOTE: several default settings have changed in version 5.15, please make sure 00:11:33.896 this does not affect your deployments: 00:11:33.896 - DUP for metadata (-m dup) 00:11:33.896 - enabled no-holes (-O no-holes) 00:11:33.896 - enabled free-space-tree (-R free-space-tree) 00:11:33.896 00:11:33.896 Label: (null) 00:11:33.896 UUID: c849c55c-057a-43fd-9655-57b42cd3fc01 00:11:33.896 Node size: 16384 00:11:33.896 Sector size: 4096 (CPU page size: 4096) 00:11:33.896 Filesystem size: 510.00MiB 00:11:33.896 Block group profiles: 00:11:33.896 Data: single 8.00MiB 00:11:33.896 Metadata: DUP 32.00MiB 00:11:33.896 System: DUP 8.00MiB 00:11:33.896 SSD detected: yes 00:11:33.896 Zoned device: no 00:11:33.896 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:33.896 Checksum: crc32c 00:11:33.896 Number of devices: 1 00:11:33.896 Devices: 00:11:33.896 ID SIZE PATH 00:11:33.896 1 510.00MiB /dev/nvme0n1p1 00:11:33.896 00:11:33.896 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:33.896 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.158 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.158 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:34.158 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.158 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:34.158 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:34.158 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3406907 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.420 00:11:34.420 real 0m0.779s 00:11:34.420 user 0m0.030s 00:11:34.420 sys 0m0.116s 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:34.420 ************************************ 00:11:34.420 END TEST filesystem_in_capsule_btrfs 00:11:34.420 ************************************ 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.420 ************************************ 00:11:34.420 START TEST filesystem_in_capsule_xfs 00:11:34.420 ************************************ 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:34.420 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:34.420 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:34.420 = sectsz=512 attr=2, projid32bit=1 00:11:34.420 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:34.420 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:34.420 data = bsize=4096 blocks=130560, imaxpct=25 00:11:34.420 = sunit=0 swidth=0 blks 00:11:34.420 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:34.420 log =internal log bsize=4096 blocks=16384, version=2 00:11:34.420 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:34.420 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:35.363 Discarding blocks...Done. 
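[editor sketch] The three in-capsule filesystem tests (ext4, btrfs, and xfs above) all funnel through the same make_filesystem wrapper traced at autotest_common.sh@930-949. A minimal bash reconstruction from the xtrace lines in this log follows; the retry bound and sleep are assumptions, since the trace only shows a single successful mkfs followed by "return 0":

    # make_filesystem <fstype> <dev>: force-format a device, per the trace above
    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        # ext4 takes -F to force; btrfs and xfs take -f (sh@935-938 in the trace)
        [ "$fstype" = ext4 ] && force=-F || force=-f
        until "mkfs.$fstype" $force "$dev_name"; do
            (( ++i > 5 )) && return 1   # retry limit assumed, not shown in the log
            sleep 1
        done
        return 0                        # matches the '# return 0' at sh@949 above
    }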
00:11:35.363 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:35.363 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.912 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3406907 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.912 00:11:37.912 real 0m3.635s 00:11:37.912 user 0m0.026s 00:11:37.912 sys 0m0.084s 00:11:37.912 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.913 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:37.913 ************************************ 00:11:37.913 END TEST filesystem_in_capsule_xfs 00:11:37.913 ************************************ 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3406907 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3406907 ']' 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3406907 00:11:38.174 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:38.436 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.436 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3406907 00:11:38.436 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.436 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.436 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3406907' 00:11:38.436 killing process with pid 3406907 00:11:38.436 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3406907 00:11:38.436 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3406907 00:11:38.698 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:38.698 00:11:38.698 real 0m19.694s 00:11:38.698 user 1m17.886s 00:11:38.698 sys 0m1.438s 00:11:38.698 11:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.698 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.698 ************************************ 00:11:38.698 END TEST nvmf_filesystem_in_capsule 00:11:38.699 ************************************ 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.699 rmmod nvme_tcp 00:11:38.699 rmmod nvme_fabrics 00:11:38.699 rmmod nvme_keyring 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.699 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:41.251 00:11:41.251 real 0m49.552s 00:11:41.251 user 2m37.808s 00:11:41.251 sys 0m8.738s 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.251 
************************************ 00:11:41.251 END TEST nvmf_filesystem 00:11:41.251 ************************************ 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.251 ************************************ 00:11:41.251 START TEST nvmf_target_discovery 00:11:41.251 ************************************ 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:41.251 * Looking for test storage... 00:11:41.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:41.251 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.251 --rc genhtml_branch_coverage=1 00:11:41.251 --rc genhtml_function_coverage=1 00:11:41.251 --rc genhtml_legend=1 00:11:41.251 --rc geninfo_all_blocks=1 00:11:41.251 --rc geninfo_unexecuted_blocks=1 00:11:41.251 00:11:41.251 ' 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.251 --rc genhtml_branch_coverage=1 00:11:41.251 --rc genhtml_function_coverage=1 00:11:41.251 --rc genhtml_legend=1 00:11:41.251 --rc geninfo_all_blocks=1 00:11:41.251 --rc geninfo_unexecuted_blocks=1 00:11:41.251 00:11:41.251 ' 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.251 --rc genhtml_branch_coverage=1 00:11:41.251 --rc genhtml_function_coverage=1 00:11:41.251 --rc genhtml_legend=1 00:11:41.251 --rc geninfo_all_blocks=1 00:11:41.251 --rc geninfo_unexecuted_blocks=1 00:11:41.251 00:11:41.251 ' 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.251 --rc genhtml_branch_coverage=1 00:11:41.251 --rc genhtml_function_coverage=1 00:11:41.251 --rc genhtml_legend=1 00:11:41.251 --rc geninfo_all_blocks=1 00:11:41.251 --rc geninfo_unexecuted_blocks=1 00:11:41.251 00:11:41.251 ' 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.251 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.252 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.405 11:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:49.405 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.405 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:49.406 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:49.406 Found net devices under 0000:31:00.0: cvl_0_0 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
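[editor sketch] A condensed view of the NIC-discovery loop being traced here (nvmf/common.sh@410-429 in the log), which maps each supported PCI address to its kernel net device. Variable names match the xtrace; the nullglob guard is an assumption added so the snippet is self-contained, and the link operstate check seen at sh@418 ('[[ up == up ]]') is omitted:

    shopt -s nullglob   # assumed, so a PCI slot with no net device yields an empty array
    for pci in "${pci_devs[@]}"; do
        # each supported PCI address exposes its interface under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done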
00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:49.406 Found net devices under 0000:31:00.1: cvl_0_1 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.406 11:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:11:49.406 00:11:49.406 --- 10.0.0.2 ping statistics --- 00:11:49.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.406 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:11:49.406 00:11:49.406 --- 10.0.0.1 ping statistics --- 00:11:49.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.406 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3415235 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3415235 00:11:49.406 11:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.406 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3415235 ']' 00:11:49.407 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.407 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.407 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.407 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.407 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 [2024-12-09 11:26:40.552822] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:11:49.407 [2024-12-09 11:26:40.552879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.407 [2024-12-09 11:26:40.634330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.407 [2024-12-09 11:26:40.673009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.407 [2024-12-09 11:26:40.673045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.407 [2024-12-09 11:26:40.673053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.407 [2024-12-09 11:26:40.673060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.407 [2024-12-09 11:26:40.673066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
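Everything from nvmf_tcp_init above down to the nvmf_tgt launch is the standard two-port fixture: one E810 port moves into a private network namespace to play target while its sibling stays in the root namespace as initiator, the NVMe/TCP port is opened in the firewall, and the target binary starts inside the namespace. Condensed into one sketch, with names, addresses and flags copied from the trace and the binary path shortened from the workspace layout (run as root); the rpc_cmd calls that follow in the log are wrappers over scripts/rpc.py against the /var/tmp/spdk.sock socket, so one pass of the discovery.sh@26-30 loop is also sketched, not quoted:

# Fixture plumbing recorded at nvmf/common.sh@250-287 above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Target process lives inside the namespace (nvmf/common.sh@508 above):
ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Roughly what one iteration of the discovery.sh loop below amounts to
# (a sketch under the default RPC socket, not the script itself):
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_null_create Null1 102400 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The two pings recorded above are the fixture's smoke test: each direction must answer before the RPC-driven part of the test is allowed to start.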
00:11:49.407 [2024-12-09 11:26:40.674625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.407 [2024-12-09 11:26:40.674742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.407 [2024-12-09 11:26:40.674899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.407 [2024-12-09 11:26:40.674900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 [2024-12-09 11:26:41.402247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 Null1 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 [2024-12-09 11:26:41.462590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 Null2 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:49.407 Null3 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.407 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.670 Null4 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.670 11:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420
00:11:49.670
00:11:49.670 Discovery Log Number of Records 6, Generation counter 6
00:11:49.670 =====Discovery Log Entry 0======
00:11:49.670 trtype: tcp
00:11:49.670 adrfam: ipv4
00:11:49.670 subtype: current discovery subsystem
00:11:49.670 treq: not required
00:11:49.670 portid: 0
00:11:49.670 trsvcid: 4420
00:11:49.670 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:49.670 traddr: 10.0.0.2
00:11:49.670 eflags: explicit discovery connections, duplicate discovery information
00:11:49.670 sectype: none
00:11:49.670 =====Discovery Log Entry 1======
00:11:49.670 trtype: tcp
00:11:49.670 adrfam: ipv4
00:11:49.670 subtype: nvme subsystem
00:11:49.670 treq: not required
00:11:49.670 portid: 0
00:11:49.670 trsvcid: 4420
00:11:49.670 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:49.670 traddr: 10.0.0.2
00:11:49.670 eflags: none
00:11:49.670 sectype: none
00:11:49.670 =====Discovery Log Entry 2======
00:11:49.670 trtype: tcp
00:11:49.670 adrfam: ipv4
00:11:49.670 subtype: nvme subsystem
00:11:49.670 treq: not required
00:11:49.670 portid: 0
00:11:49.670 trsvcid: 4420
00:11:49.670 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:49.670 traddr: 10.0.0.2
00:11:49.670 eflags: none
00:11:49.670 sectype: none
00:11:49.670 =====Discovery Log Entry 3======
00:11:49.670 trtype: tcp
00:11:49.670 adrfam: ipv4
00:11:49.670 subtype: nvme subsystem
00:11:49.670 treq: not required
00:11:49.670 portid: 0
00:11:49.670 trsvcid: 4420
00:11:49.670 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:49.670 traddr: 10.0.0.2
00:11:49.670 eflags: none
00:11:49.670 sectype: none
00:11:49.670 =====Discovery Log Entry 4======
00:11:49.670 trtype: tcp
00:11:49.670 adrfam: ipv4
00:11:49.670 subtype: nvme subsystem
00:11:49.670 treq: not required
00:11:49.670 portid: 0
00:11:49.670 trsvcid: 4420
00:11:49.670 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:49.670 traddr: 10.0.0.2
00:11:49.670 eflags: none
00:11:49.670 sectype: none
00:11:49.670 =====Discovery Log Entry 5======
00:11:49.670 trtype: tcp
00:11:49.670 adrfam: ipv4
00:11:49.670 subtype: discovery subsystem referral
00:11:49.670 treq: not required
00:11:49.670 portid: 0
00:11:49.670 trsvcid: 4430
00:11:49.670 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:49.670 traddr: 10.0.0.2
00:11:49.670 eflags: none
00:11:49.670 sectype: none
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:49.670 Perform nvmf subsystem discovery via RPC
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.670 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:49.670 [
00:11:49.670 {
00:11:49.670 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:49.670 "subtype": "Discovery",
00:11:49.670 "listen_addresses": [
00:11:49.670 {
00:11:49.670 "trtype": "TCP",
00:11:49.670 "adrfam": "IPv4",
00:11:49.670 "traddr": "10.0.0.2",
00:11:49.670 "trsvcid": "4420"
00:11:49.671 }
00:11:49.671 ],
00:11:49.671 "allow_any_host": true,
00:11:49.671 "hosts": []
00:11:49.671 },
00:11:49.671 {
00:11:49.671 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:49.671 "subtype": "NVMe",
00:11:49.671 "listen_addresses": [
00:11:49.671 {
00:11:49.671 "trtype": "TCP",
00:11:49.671 "adrfam": "IPv4",
00:11:49.671 "traddr": "10.0.0.2",
00:11:49.671 "trsvcid": "4420"
00:11:49.671 }
00:11:49.671 ],
00:11:49.671 "allow_any_host": true,
00:11:49.671 "hosts": [],
00:11:49.671 "serial_number": "SPDK00000000000001",
00:11:49.671 "model_number": "SPDK bdev Controller",
00:11:49.671 "max_namespaces": 32,
00:11:49.671 "min_cntlid": 1,
00:11:49.671 "max_cntlid": 65519,
00:11:49.671 "namespaces": [
00:11:49.671 {
00:11:49.671 "nsid": 1,
00:11:49.671 "bdev_name": "Null1",
00:11:49.671 "name": "Null1",
00:11:49.671 "nguid": "304B2623F1A64055BF0393EE860C7A29",
00:11:49.671 "uuid": "304b2623-f1a6-4055-bf03-93ee860c7a29"
00:11:49.671 }
00:11:49.671 ]
00:11:49.671 },
00:11:49.671 {
00:11:49.671 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:49.671 "subtype": "NVMe",
00:11:49.671 "listen_addresses": [
00:11:49.671 {
00:11:49.671 "trtype": "TCP",
00:11:49.671 "adrfam": "IPv4",
00:11:49.671 "traddr": "10.0.0.2",
00:11:49.671 "trsvcid": "4420"
00:11:49.671 }
00:11:49.671 ],
00:11:49.671 "allow_any_host": true,
00:11:49.671 "hosts": [],
00:11:49.671 "serial_number": "SPDK00000000000002",
00:11:49.671 "model_number": "SPDK bdev Controller",
00:11:49.671 "max_namespaces": 32,
00:11:49.671 "min_cntlid": 1,
00:11:49.671 "max_cntlid": 65519,
00:11:49.671 "namespaces": [
00:11:49.671 {
00:11:49.671 "nsid": 1,
00:11:49.671 "bdev_name": "Null2",
00:11:49.671 "name": "Null2",
00:11:49.671 "nguid": "883BB2A864E747A68B859D285AB742B1",
00:11:49.671 "uuid": "883bb2a8-64e7-47a6-8b85-9d285ab742b1"
00:11:49.671 }
00:11:49.671 ]
00:11:49.671 },
00:11:49.671 {
00:11:49.671 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:49.671 "subtype": "NVMe",
00:11:49.671 "listen_addresses": [
00:11:49.671 {
00:11:49.671 "trtype": "TCP",
00:11:49.671 "adrfam": "IPv4",
00:11:49.671 "traddr": "10.0.0.2",
00:11:49.671 "trsvcid": "4420"
00:11:49.671 }
00:11:49.671 ],
00:11:49.671 "allow_any_host": true,
00:11:49.671 "hosts": [],
00:11:49.671 "serial_number": "SPDK00000000000003",
00:11:49.671 "model_number": "SPDK bdev Controller",
00:11:49.671 "max_namespaces": 32,
00:11:49.671 "min_cntlid": 1,
00:11:49.671 "max_cntlid": 65519,
00:11:49.671 "namespaces": [
00:11:49.671 {
00:11:49.671 "nsid": 1,
00:11:49.671 "bdev_name": "Null3",
00:11:49.671 "name": "Null3",
00:11:49.671 "nguid": "FAF1434455AD46E5873E95552BF4FBAC",
00:11:49.671 "uuid": "faf14344-55ad-46e5-873e-95552bf4fbac"
00:11:49.671 }
00:11:49.671 ]
00:11:49.671 },
00:11:49.671 {
00:11:49.671 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:49.671 "subtype": "NVMe",
00:11:49.671 "listen_addresses": [
00:11:49.671 {
00:11:49.671 "trtype": "TCP",
00:11:49.671 "adrfam": "IPv4",
00:11:49.671 "traddr": "10.0.0.2",
00:11:49.671 "trsvcid": "4420"
00:11:49.671 }
00:11:49.671 ],
00:11:49.671 "allow_any_host": true,
00:11:49.671 "hosts": [],
00:11:49.671 "serial_number": "SPDK00000000000004",
00:11:49.671 "model_number": "SPDK bdev Controller",
00:11:49.671 "max_namespaces": 32,
00:11:49.671 "min_cntlid": 1,
00:11:49.671 "max_cntlid": 65519,
00:11:49.671 "namespaces": [
00:11:49.671 {
00:11:49.671 "nsid": 1,
00:11:49.671 "bdev_name": "Null4",
00:11:49.671 "name": "Null4",
00:11:49.671 "nguid": "8BAAC5CC338A4A35BE57C1118419F098",
00:11:49.671 "uuid": "8baac5cc-338a-4a35-be57-c1118419f098"
00:11:49.671 }
00:11:49.671 ]
00:11:49.671 }
00:11:49.671 ]
00:11:49.671 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.671 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:11:49.671 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:49.671 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:49.671 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.671 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.934 11:26:41
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:49.934 11:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.934 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:49.935 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.935 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:49.935 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.935 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.935 rmmod nvme_tcp 00:11:49.935 rmmod nvme_fabrics 00:11:49.935 rmmod nvme_keyring 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3415235 ']' 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3415235 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3415235 ']' 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3415235 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.935 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3415235 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3415235' 00:11:50.196 killing process with pid 3415235 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3415235 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3415235 00:11:50.196 11:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.196 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.747 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.747 00:11:52.747 real 0m11.417s 00:11:52.747 user 0m8.599s 00:11:52.747 sys 0m5.856s 00:11:52.747 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.747 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.747 ************************************ 00:11:52.747 END TEST nvmf_target_discovery 00:11:52.747 ************************************ 00:11:52.747 11:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:52.747 11:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.748 ************************************ 00:11:52.748 START TEST nvmf_referrals 00:11:52.748 ************************************ 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:52.748 * Looking for test storage... 
00:11:52.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.748 --rc genhtml_branch_coverage=1 00:11:52.748 --rc genhtml_function_coverage=1 00:11:52.748 --rc genhtml_legend=1 00:11:52.748 --rc geninfo_all_blocks=1 00:11:52.748 --rc geninfo_unexecuted_blocks=1 00:11:52.748 00:11:52.748 ' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.748 --rc genhtml_branch_coverage=1 00:11:52.748 --rc genhtml_function_coverage=1 00:11:52.748 --rc genhtml_legend=1 00:11:52.748 --rc geninfo_all_blocks=1 00:11:52.748 --rc geninfo_unexecuted_blocks=1 00:11:52.748 00:11:52.748 ' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.748 --rc genhtml_branch_coverage=1 00:11:52.748 --rc genhtml_function_coverage=1 00:11:52.748 --rc genhtml_legend=1 00:11:52.748 --rc geninfo_all_blocks=1 00:11:52.748 --rc geninfo_unexecuted_blocks=1 00:11:52.748 00:11:52.748 ' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.748 --rc genhtml_branch_coverage=1 00:11:52.748 --rc genhtml_function_coverage=1 00:11:52.748 --rc genhtml_legend=1 00:11:52.748 --rc geninfo_all_blocks=1 00:11:52.748 --rc geninfo_unexecuted_blocks=1 00:11:52.748 00:11:52.748 ' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.748 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
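The "line 33: [: : integer expression expected" complaint above is real but harmless to the run: build_nvmf_app_args reaches an arithmetic test with an unset variable, and [ '' -eq 1 ] is a type error in POSIX test. The usual hedge is a parameter-expansion default, shown here with a placeholder name since the actual variable at nvmf/common.sh line 33 is not visible in this trace:

# '[ "" -eq 1 ]' fails with "integer expression expected"; defaulting the
# variable first keeps the test well-typed. SOME_TEST_FLAG is a stand-in,
# not the real variable used by nvmf/common.sh.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi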
00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.749 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:00.897 11:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:00.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:00.897 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:00.898 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:00.898 
11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:00.898 Found net devices under 0000:31:00.0: cvl_0_0 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:00.898 Found net devices under 0000:31:00.1: cvl_0_1 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.898 11:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:12:00.898 00:12:00.898 --- 10.0.0.2 ping statistics --- 00:12:00.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.898 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:12:00.898 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:12:00.898 00:12:00.898 --- 10.0.0.1 ping statistics --- 00:12:00.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.898 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3419685 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3419685 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3419685 ']' 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
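What nvmf_tcp_init just built: the two ports are treated as a target/initiator pair, the target port (cvl_0_0) is moved into a private network namespace so target and initiator get independent network stacks on one host, and a single ping in each direction proves the data path before any NVMe traffic flows. Condensed from the trace (interface names and addresses as logged; with NET_TYPE=phy the two ports are presumably cabled back-to-back):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own net stack
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP default port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every subsequent target-side command is then prefixed with "ip netns exec cvl_0_0_ns_spdk" via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt process below starts inside the namespace.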
00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.898 [2024-12-09 11:26:52.115254] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:12:00.898 [2024-12-09 11:26:52.115316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.898 [2024-12-09 11:26:52.199918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.898 [2024-12-09 11:26:52.241375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.898 [2024-12-09 11:26:52.241413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.898 [2024-12-09 11:26:52.241421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.898 [2024-12-09 11:26:52.241428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.898 [2024-12-09 11:26:52.241434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.898 [2024-12-09 11:26:52.243048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.898 [2024-12-09 11:26:52.243273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.898 [2024-12-09 11:26:52.243274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.898 [2024-12-09 11:26:52.243127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.898 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 [2024-12-09 11:26:52.963419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
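nvmfappstart above launches the target inside the namespace, and the referral test then drives it over the RPC socket: create the TCP transport, add a discovery listener on 10.0.0.2:8009. Since rpc_cmd in the suite forwards to scripts/rpc.py, the equivalent standalone steps would look roughly like this sketch (flags and socket path copied from the trace):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # wait until the app answers on /var/tmp/spdk.sock (waitforlisten above), then:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

The positional "discovery" is shorthand for the well-known discovery subsystem NQN (nqn.2014-08.org.nvmexpress.discovery), which is the subsystem the referrals added below attach to.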
00:12:00.899 [2024-12-09 11:26:52.992181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.899 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.899 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.160 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:01.160 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:01.160 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.160 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.160 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.160 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.160 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.161 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:01.422 11:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.422 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.684 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.945 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:01.945 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:01.945 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:01.945 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:01.945 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:01.945 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.945 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:01.945 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:01.945 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:01.945 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:01.945 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:01.945 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.945 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.205 11:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.205 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.467 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:02.467 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:02.467 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:02.467 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:02.467 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:02.467 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.467 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:02.728 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:02.728 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:02.728 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:02.728 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:02.728 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.728 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.990 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
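The whole referral suite reduces to one invariant, re-checked after every mutation: the referral list as the target reports it over RPC must match what a host actually receives in the discovery log page. get_referral_ips reads both sides with the jq filters seen above; a condensed version of the comparison (the --hostnqn/--hostid arguments from the trace are elided here):

    # Both views of the referral list, normalized with sort, must be identical.
    rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals \
              | jq -r '.[].address.traddr' | sort)
    log_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
              | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
              | sort)
    [[ "$rpc_ips" == "$log_ips" ]]

The select() drops the record describing the discovery controller itself, leaving only referral and subsystem entries, and the -n flag on nvmf_discovery_add_referral decides whether a referral advertises another discovery service or a specific subsystem such as nqn.2016-06.io.spdk:cnode1, which is what the subnqn checks above distinguish.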
00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.251 rmmod nvme_tcp 00:12:03.251 rmmod nvme_fabrics 00:12:03.251 rmmod nvme_keyring 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3419685 ']' 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3419685 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3419685 ']' 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3419685 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3419685 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3419685' 00:12:03.251 killing process with pid 3419685 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3419685 00:12:03.251 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3419685 00:12:03.512 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.512 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.513 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.513 11:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.427 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.427 00:12:05.427 real 0m13.166s 00:12:05.427 user 0m16.021s 00:12:05.427 sys 0m6.348s 00:12:05.427 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.427 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.427 ************************************ 00:12:05.427 END TEST nvmf_referrals 00:12:05.427 ************************************ 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.688 ************************************ 00:12:05.688 START TEST nvmf_connect_disconnect 00:12:05.688 ************************************ 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.688 * Looking for test storage... 00:12:05.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.688 11:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.688 --rc genhtml_branch_coverage=1 00:12:05.688 --rc genhtml_function_coverage=1 00:12:05.688 --rc genhtml_legend=1 00:12:05.688 --rc geninfo_all_blocks=1 00:12:05.688 --rc geninfo_unexecuted_blocks=1 00:12:05.688 00:12:05.688 ' 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.688 --rc genhtml_branch_coverage=1 00:12:05.688 --rc genhtml_function_coverage=1 00:12:05.688 --rc genhtml_legend=1 00:12:05.688 --rc geninfo_all_blocks=1 00:12:05.688 --rc geninfo_unexecuted_blocks=1 00:12:05.688 00:12:05.688 ' 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.688 --rc genhtml_branch_coverage=1 00:12:05.688 --rc genhtml_function_coverage=1 00:12:05.688 --rc genhtml_legend=1 00:12:05.688 --rc geninfo_all_blocks=1 00:12:05.688 --rc geninfo_unexecuted_blocks=1 00:12:05.688 00:12:05.688 ' 00:12:05.688 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.688 --rc genhtml_branch_coverage=1 00:12:05.688 --rc genhtml_function_coverage=1 00:12:05.688 --rc genhtml_legend=1 00:12:05.688 --rc geninfo_all_blocks=1 00:12:05.689 --rc geninfo_unexecuted_blocks=1 00:12:05.689 00:12:05.689 ' 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.689 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.949 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.950 11:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.950 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.091 
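One line worth pausing on above: '[' '' -eq 1 ']' followed by common.sh line 33 complaining "[: : integer expression expected". That is bash's test builtin rejecting an empty operand in a numeric comparison; some variable expanded to the empty string before reaching -eq. The trace does not show which variable it was, so FLAG below is a hypothetical stand-in, but the ${var:-0} default is the usual guard for exactly this failure:

    FLAG=""
    [ "$FLAG" -eq 1 ]         # reproduces: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ]    # safe: empty expands to 0 and the test is simply false

As the trace shows, execution continues straight on to the next check (common.sh@37), so the message is noise in the log rather than a test failure, which is why it recurs at the start of each nvmf test.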
11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:14.091 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.091 
11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:14.091 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:14.091 Found net devices under 0000:31:00.0: cvl_0_0 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
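The discovery pass above is driven entirely by sysfs: each whitelisted PCI function is matched by vendor/device ID and then resolved to its kernel net device. A standalone sketch of that lookup, using the Intel 0x8086:0x159b (E810) IDs and the "Found net devices under ..." output format seen in this run; the real gather step additionally walks a pre-built pci_bus_cache and checks that each interface is up, as the [[ up == up ]] tests in the trace show.

# Resolve each whitelisted PCI function to its kernel net device via sysfs.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done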
00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:14.091 Found net devices under 0000:31:00.1: cvl_0_1 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.091 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:14.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:12:14.092 00:12:14.092 --- 10.0.0.2 ping statistics --- 00:12:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.092 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:14.092 00:12:14.092 --- 10.0.0.1 ping statistics --- 00:12:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.092 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3424929 00:12:14.092 11:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3424929 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3424929 ']' 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.092 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.092 [2024-12-09 11:27:05.500333] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:12:14.092 [2024-12-09 11:27:05.500398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.092 [2024-12-09 11:27:05.585922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.092 [2024-12-09 11:27:05.626933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.092 [2024-12-09 11:27:05.626976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.092 [2024-12-09 11:27:05.626984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.092 [2024-12-09 11:27:05.626991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.092 [2024-12-09 11:27:05.626997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
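The nvmfappstart/waitforlisten pair traced above amounts to launching nvmf_tgt inside the target namespace and polling its UNIX-domain RPC socket until it answers. A reduced sketch of that pattern, assuming the workspace layout of this run; the polling loop is illustrative rather than the harness's exact waitforlisten, though the retry bound matches the max_retries=100 visible above.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
pid=$!
for ((i = 0; i < 100; i++)); do   # max_retries=100, matching the trace above
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done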
00:12:14.092 [2024-12-09 11:27:05.628651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.092 [2024-12-09 11:27:05.628770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.092 [2024-12-09 11:27:05.628933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.092 [2024-12-09 11:27:05.628933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.353 [2024-12-09 11:27:06.357068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.353 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.354 11:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.354 [2024-12-09 11:27:06.426490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:14.354 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:18.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.659 rmmod nvme_tcp 00:12:32.659 rmmod nvme_fabrics 00:12:32.659 rmmod nvme_keyring 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3424929 ']' 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3424929 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3424929 ']' 00:12:32.659 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3424929 00:12:32.660 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
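Each of the five iterations summarized above ("disconnected 1 controller(s)") is a plain fabrics connect followed by an immediate disconnect against the subsystem just configured. A minimal nvme-cli reproduction, taking the address, port, NQN, and iteration count directly from this run; the actual test additionally waits for the block device to appear between the two steps.

for i in $(seq 1 5); do   # num_iterations=5, as set above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits the "disconnected 1 controller(s)" line
done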
00:12:32.660 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.660 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3424929 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3424929' 00:12:32.920 killing process with pid 3424929 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3424929 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3424929 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.920 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.469 00:12:35.469 real 0m29.406s 00:12:35.469 user 1m19.073s 00:12:35.469 sys 0m7.196s 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.469 ************************************ 00:12:35.469 END TEST nvmf_connect_disconnect 00:12:35.469 ************************************ 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.469 11:27:27 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.469 ************************************ 00:12:35.469 START TEST nvmf_multitarget 00:12:35.469 ************************************ 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:35.469 * Looking for test storage... 00:12:35.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.469 --rc genhtml_branch_coverage=1 00:12:35.469 --rc genhtml_function_coverage=1 00:12:35.469 --rc genhtml_legend=1 00:12:35.469 --rc geninfo_all_blocks=1 00:12:35.469 --rc geninfo_unexecuted_blocks=1 00:12:35.469 00:12:35.469 ' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.469 --rc genhtml_branch_coverage=1 00:12:35.469 --rc genhtml_function_coverage=1 00:12:35.469 --rc genhtml_legend=1 00:12:35.469 --rc geninfo_all_blocks=1 00:12:35.469 --rc geninfo_unexecuted_blocks=1 00:12:35.469 00:12:35.469 ' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.469 --rc genhtml_branch_coverage=1 00:12:35.469 --rc genhtml_function_coverage=1 00:12:35.469 --rc genhtml_legend=1 00:12:35.469 --rc geninfo_all_blocks=1 00:12:35.469 --rc geninfo_unexecuted_blocks=1 00:12:35.469 00:12:35.469 ' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.469 --rc genhtml_branch_coverage=1 00:12:35.469 --rc genhtml_function_coverage=1 00:12:35.469 --rc genhtml_legend=1 00:12:35.469 --rc geninfo_all_blocks=1 00:12:35.469 --rc geninfo_unexecuted_blocks=1 00:12:35.469 00:12:35.469 ' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.469 11:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.469 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:35.470 11:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.470 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
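The lcov gate a few records back ("lt 1.15 2") is a pure-bash dotted-version comparison: split both strings on the separators, then compare element-wise, padding the shorter list with zeros. A compact equivalent of the cmp_versions path traced there, written as an illustrative helper rather than the script verbatim.

version_lt() {   # true when $1 < $2, e.g. version_lt 1.15 2
    local IFS=.- a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"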
00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.611 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:43.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:43.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:43.612 Found net devices under 0000:31:00.0: cvl_0_0 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:43.612 Found net devices under 0000:31:00.1: cvl_0_1 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:12:43.612 00:12:43.612 --- 10.0.0.2 ping statistics --- 00:12:43.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.612 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:43.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:12:43.612 00:12:43.612 --- 10.0.0.1 ping statistics --- 00:12:43.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.612 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3433567 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3433567 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3433567 ']' 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.612 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.612 [2024-12-09 11:27:34.958357] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
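As in the first suite, nvmf_tcp_init above splits the two back-to-back E810 ports across namespaces: cvl_0_0 becomes the target side inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator, and the firewall rule is tagged so teardown can strip exactly what setup added. Condensed from the trace, with every command as invoked in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# The SPDK_NVMF comment lets the later iptables-save | grep -v SPDK_NVMF |
# iptables-restore pass remove this rule without touching unrelated ones.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1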
00:12:43.612 [2024-12-09 11:27:34.958455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.612 [2024-12-09 11:27:35.044683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.612 [2024-12-09 11:27:35.086317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.612 [2024-12-09 11:27:35.086355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.612 [2024-12-09 11:27:35.086363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.612 [2024-12-09 11:27:35.086369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.612 [2024-12-09 11:27:35.086376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.612 [2024-12-09 11:27:35.087974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.612 [2024-12-09 11:27:35.088118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.612 [2024-12-09 11:27:35.088179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.612 [2024-12-09 11:27:35.088180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.612 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.612 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:43.612 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.612 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.612 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.873 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.873 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:43.873 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.873 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:43.873 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:43.873 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:43.873 "nvmf_tgt_1" 00:12:43.873 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:44.133 "nvmf_tgt_2" 00:12:44.133 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
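The multitarget assertions around this point reduce to a create/count/delete round-trip against the custom RPC helper: the default target alone gives a count of 1, two nvmf_create_target calls raise it to 3, and the deletes bring it back down. In outline, with the helper path and flags exactly as invoked in this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two just created
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]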
00:12:44.133 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:44.133 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:44.133 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:44.393 true 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:44.393 true 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.393 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:44.393 rmmod nvme_tcp 00:12:44.653 rmmod nvme_fabrics 00:12:44.653 rmmod nvme_keyring 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3433567 ']' 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3433567 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3433567 ']' 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3433567 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3433567 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.653 11:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3433567' 00:12:44.653 killing process with pid 3433567 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3433567 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3433567 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.653 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.200 00:12:47.200 real 0m11.749s 00:12:47.200 user 0m9.758s 00:12:47.200 sys 0m6.148s 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:47.200 ************************************ 00:12:47.200 END TEST nvmf_multitarget 00:12:47.200 ************************************ 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.200 ************************************ 00:12:47.200 START TEST nvmf_rpc 00:12:47.200 ************************************ 00:12:47.200 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.200 * Looking for test storage... 
00:12:47.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.200 --rc genhtml_branch_coverage=1 00:12:47.200 --rc genhtml_function_coverage=1 00:12:47.200 --rc genhtml_legend=1 00:12:47.200 --rc geninfo_all_blocks=1 00:12:47.200 --rc geninfo_unexecuted_blocks=1 00:12:47.200 00:12:47.200 ' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.200 --rc genhtml_branch_coverage=1 00:12:47.200 --rc genhtml_function_coverage=1 00:12:47.200 --rc genhtml_legend=1 00:12:47.200 --rc geninfo_all_blocks=1 00:12:47.200 --rc geninfo_unexecuted_blocks=1 00:12:47.200 00:12:47.200 ' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.200 --rc genhtml_branch_coverage=1 00:12:47.200 --rc genhtml_function_coverage=1 00:12:47.200 --rc genhtml_legend=1 00:12:47.200 --rc geninfo_all_blocks=1 00:12:47.200 --rc geninfo_unexecuted_blocks=1 00:12:47.200 00:12:47.200 ' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.200 --rc genhtml_branch_coverage=1 00:12:47.200 --rc genhtml_function_coverage=1 00:12:47.200 --rc genhtml_legend=1 00:12:47.200 --rc geninfo_all_blocks=1 00:12:47.200 --rc geninfo_unexecuted_blocks=1 00:12:47.200 00:12:47.200 ' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
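The lcov check traced above uses the cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' and the fields are compared numerically left to right, so 'lt 1.15 2' succeeds because 1 < 2 in the first field. A self-contained sketch of the same pattern (ver_lt is a hypothetical name; the real helper additionally validates that each field is numeric):

    ver_lt() {                      # succeeds if version $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo 'lcov predates 2.x'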
00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.200 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.201 11:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.201 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:55.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:55.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:55.345 Found net devices under 0000:31:00.0: cvl_0_0 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:55.345 Found net devices under 0000:31:00.1: cvl_0_1 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:55.345 11:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.345 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:55.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:12:55.346 00:12:55.346 --- 10.0.0.2 ping statistics --- 00:12:55.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.346 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:12:55.346 00:12:55.346 --- 10.0.0.1 ping statistics --- 00:12:55.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.346 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3438091 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3438091 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3438091 ']' 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.346 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.346 [2024-12-09 11:27:46.687811] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
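Both ping checks pass, confirming the namespace plumbing that nvmf_tcp_init set up a few steps earlier. Collected from the trace, the wiring is (cvl_0_0 and cvl_0_1 are the two e810 ports found under 0000:31:00.0 and 0000:31:00.1 above):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                 # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host

With reachability proven, nvmfappstart launches nvmf_tgt inside the namespace, which is the SPDK startup banner beginning directly above.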
00:12:55.346 [2024-12-09 11:27:46.687875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.346 [2024-12-09 11:27:46.772608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.346 [2024-12-09 11:27:46.814405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.346 [2024-12-09 11:27:46.814440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.346 [2024-12-09 11:27:46.814448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.346 [2024-12-09 11:27:46.814455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.346 [2024-12-09 11:27:46.814460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.346 [2024-12-09 11:27:46.816338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.346 [2024-12-09 11:27:46.816461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.346 [2024-12-09 11:27:46.816618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.346 [2024-12-09 11:27:46.816618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.346 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.346 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:55.346 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.346 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.346 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.607 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.607 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:55.607 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.607 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.607 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.607 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:55.607 "tick_rate": 2400000000, 00:12:55.607 "poll_groups": [ 00:12:55.607 { 00:12:55.607 "name": "nvmf_tgt_poll_group_000", 00:12:55.607 "admin_qpairs": 0, 00:12:55.607 "io_qpairs": 0, 00:12:55.607 "current_admin_qpairs": 0, 00:12:55.607 "current_io_qpairs": 0, 00:12:55.607 "pending_bdev_io": 0, 00:12:55.607 "completed_nvme_io": 0, 00:12:55.607 "transports": [] 00:12:55.607 }, 00:12:55.607 { 00:12:55.607 "name": "nvmf_tgt_poll_group_001", 00:12:55.607 "admin_qpairs": 0, 00:12:55.607 "io_qpairs": 0, 00:12:55.607 "current_admin_qpairs": 0, 00:12:55.607 "current_io_qpairs": 0, 00:12:55.607 "pending_bdev_io": 0, 00:12:55.607 "completed_nvme_io": 0, 00:12:55.607 "transports": [] 00:12:55.607 }, 00:12:55.607 { 00:12:55.607 "name": "nvmf_tgt_poll_group_002", 00:12:55.607 "admin_qpairs": 0, 00:12:55.607 "io_qpairs": 0, 00:12:55.607 
"current_admin_qpairs": 0, 00:12:55.607 "current_io_qpairs": 0, 00:12:55.607 "pending_bdev_io": 0, 00:12:55.607 "completed_nvme_io": 0, 00:12:55.607 "transports": [] 00:12:55.607 }, 00:12:55.607 { 00:12:55.607 "name": "nvmf_tgt_poll_group_003", 00:12:55.607 "admin_qpairs": 0, 00:12:55.607 "io_qpairs": 0, 00:12:55.607 "current_admin_qpairs": 0, 00:12:55.607 "current_io_qpairs": 0, 00:12:55.607 "pending_bdev_io": 0, 00:12:55.607 "completed_nvme_io": 0, 00:12:55.607 "transports": [] 00:12:55.607 } 00:12:55.608 ] 00:12:55.608 }' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.608 [2024-12-09 11:27:47.661067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:55.608 "tick_rate": 2400000000, 00:12:55.608 "poll_groups": [ 00:12:55.608 { 00:12:55.608 "name": "nvmf_tgt_poll_group_000", 00:12:55.608 "admin_qpairs": 0, 00:12:55.608 "io_qpairs": 0, 00:12:55.608 "current_admin_qpairs": 0, 00:12:55.608 "current_io_qpairs": 0, 00:12:55.608 "pending_bdev_io": 0, 00:12:55.608 "completed_nvme_io": 0, 00:12:55.608 "transports": [ 00:12:55.608 { 00:12:55.608 "trtype": "TCP" 00:12:55.608 } 00:12:55.608 ] 00:12:55.608 }, 00:12:55.608 { 00:12:55.608 "name": "nvmf_tgt_poll_group_001", 00:12:55.608 "admin_qpairs": 0, 00:12:55.608 "io_qpairs": 0, 00:12:55.608 "current_admin_qpairs": 0, 00:12:55.608 "current_io_qpairs": 0, 00:12:55.608 "pending_bdev_io": 0, 00:12:55.608 "completed_nvme_io": 0, 00:12:55.608 "transports": [ 00:12:55.608 { 00:12:55.608 "trtype": "TCP" 00:12:55.608 } 00:12:55.608 ] 00:12:55.608 }, 00:12:55.608 { 00:12:55.608 "name": "nvmf_tgt_poll_group_002", 00:12:55.608 "admin_qpairs": 0, 00:12:55.608 "io_qpairs": 0, 00:12:55.608 "current_admin_qpairs": 0, 00:12:55.608 "current_io_qpairs": 0, 00:12:55.608 "pending_bdev_io": 0, 00:12:55.608 "completed_nvme_io": 0, 00:12:55.608 "transports": [ 00:12:55.608 { 00:12:55.608 "trtype": "TCP" 
00:12:55.608 } 00:12:55.608 ] 00:12:55.608 }, 00:12:55.608 { 00:12:55.608 "name": "nvmf_tgt_poll_group_003", 00:12:55.608 "admin_qpairs": 0, 00:12:55.608 "io_qpairs": 0, 00:12:55.608 "current_admin_qpairs": 0, 00:12:55.608 "current_io_qpairs": 0, 00:12:55.608 "pending_bdev_io": 0, 00:12:55.608 "completed_nvme_io": 0, 00:12:55.608 "transports": [ 00:12:55.608 { 00:12:55.608 "trtype": "TCP" 00:12:55.608 } 00:12:55.608 ] 00:12:55.608 } 00:12:55.608 ] 00:12:55.608 }' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:55.608 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 Malloc1 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 [2024-12-09 11:27:47.861507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:55.869 [2024-12-09 11:27:47.898466] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:55.869 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.869 could not add new controller: failed to write to nvme-fabrics device 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:55.869 11:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.869 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.256 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.256 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:57.256 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.256 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:57.256 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.805 [2024-12-09 11:27:51.623999] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:59.805 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:59.805 could not add new controller: failed to write to nvme-fabrics device 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.805 
11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.805 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.191 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.191 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.191 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.191 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:01.191 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.108 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.108 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.108 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.108 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.108 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.108 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:03.108 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.370 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.371 
11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.371 [2024-12-09 11:27:55.353509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.371 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.756 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.756 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:04.756 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.756 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:04.756 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.299 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.299 [2024-12-09 11:27:59.061074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.299 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.683 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.683 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.683 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.683 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:08.683 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:10.597 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.859 [2024-12-09 11:28:02.821467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.859 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.247 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.247 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:12.247 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.247 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:12.247 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.797 
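Each pass of the loop being traced here (target/rpc.sh lines 81-94, per the markers) rebuilds the subsystem, connects a host, verifies the namespace shows up, and tears everything back down. Reconstructed as a sketch from the traced RPCs, with rpc_cmd assumed to proxy SPDK's rpc.py against the running target:

for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed NSID 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done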
11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
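waitforserial_disconnect is the mirror image: poll until the serial no longer shows up in lsblk. A sketch from the traced commands; the trace probes with both a plain and a -l lsblk invocation, and the retry bound below is assumed rather than observed, since the device is already gone on the first check in this run:

waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1   # assumed bound; the trace never has to retry
        sleep 2
    done
    return 0
}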
00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 [2024-12-09 11:28:06.544511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.797 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.184 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.184 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.184 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.184 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.184 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.101 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.362 [2024-12-09 11:28:10.277002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.362 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.749 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.749 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.749 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.749 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:19.749 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.299 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:22.299 
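The seq 1 5 above opens a second five-pass loop (rpc.sh lines 99-107) that churns the same RPCs with no host attached: create, listen, add a namespace with an auto-assigned NSID, open access, then remove NSID 1 and delete. Sketched from the traced calls:

for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: NSID auto-assigns to 1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done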
11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.299 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 [2024-12-09 11:28:14.038734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 [2024-12-09 11:28:14.098862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 
11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 [2024-12-09 11:28:14.163021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 [2024-12-09 11:28:14.235260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 [2024-12-09 11:28:14.299476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.300 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:22.301 "tick_rate": 2400000000, 00:13:22.301 "poll_groups": [ 00:13:22.301 { 00:13:22.301 "name": "nvmf_tgt_poll_group_000", 00:13:22.301 "admin_qpairs": 0, 00:13:22.301 "io_qpairs": 224, 00:13:22.301 "current_admin_qpairs": 0, 00:13:22.301 "current_io_qpairs": 0, 00:13:22.301 "pending_bdev_io": 0, 00:13:22.301 "completed_nvme_io": 518, 00:13:22.301 "transports": [ 00:13:22.301 { 00:13:22.301 "trtype": "TCP" 00:13:22.301 } 00:13:22.301 ] 00:13:22.301 }, 00:13:22.301 { 00:13:22.301 "name": "nvmf_tgt_poll_group_001", 00:13:22.301 "admin_qpairs": 1, 00:13:22.301 "io_qpairs": 223, 00:13:22.301 "current_admin_qpairs": 0, 00:13:22.301 "current_io_qpairs": 0, 00:13:22.301 "pending_bdev_io": 0, 00:13:22.301 "completed_nvme_io": 224, 00:13:22.301 "transports": [ 00:13:22.301 { 00:13:22.301 "trtype": "TCP" 00:13:22.301 } 00:13:22.301 ] 00:13:22.301 }, 00:13:22.301 { 00:13:22.301 "name": "nvmf_tgt_poll_group_002", 00:13:22.301 "admin_qpairs": 6, 00:13:22.301 "io_qpairs": 218, 00:13:22.301 "current_admin_qpairs": 0, 00:13:22.301 "current_io_qpairs": 0, 00:13:22.301 "pending_bdev_io": 0, 00:13:22.301 "completed_nvme_io": 224, 00:13:22.301 "transports": [ 00:13:22.301 { 00:13:22.301 "trtype": "TCP" 00:13:22.301 } 00:13:22.301 ] 00:13:22.301 }, 00:13:22.301 { 00:13:22.301 "name": "nvmf_tgt_poll_group_003", 00:13:22.301 "admin_qpairs": 0, 00:13:22.301 "io_qpairs": 224, 00:13:22.301 "current_admin_qpairs": 0, 00:13:22.301 "current_io_qpairs": 0, 00:13:22.301 "pending_bdev_io": 0, 00:13:22.301 "completed_nvme_io": 273, 00:13:22.301 "transports": [ 00:13:22.301 { 00:13:22.301 "trtype": "TCP" 00:13:22.301 } 00:13:22.301 ] 00:13:22.301 } 00:13:22.301 ] 00:13:22.301 }' 00:13:22.301 11:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:22.301 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:22.562 rmmod nvme_tcp 00:13:22.562 rmmod nvme_fabrics 00:13:22.562 rmmod nvme_keyring 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3438091 ']' 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3438091 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3438091 ']' 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3438091 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3438091 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.562 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3438091' 00:13:22.562 killing process with pid 3438091 00:13:22.563 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3438091 00:13:22.563 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3438091 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.824 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.743 00:13:24.743 real 0m37.858s 00:13:24.743 user 1m53.588s 00:13:24.743 sys 0m7.790s 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.743 ************************************ 00:13:24.743 END TEST nvmf_rpc 00:13:24.743 ************************************ 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.743 ************************************ 00:13:24.743 START TEST nvmf_invalid 00:13:24.743 ************************************ 00:13:24.743 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.006 * Looking for test storage... 
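Just before the teardown above, the test summed per-poll-group counters out of nvmf_get_stats with the jsum helper; the trace shows its jq filter piped through an awk accumulator (rpc.sh@19-20). A sketch consistent with those traced commands; exactly how the real helper captures the stats JSON is assumed here:

jsum() {
    local filter=$1
    # Sum one numeric field across all poll groups in the stats JSON.
    rpc_cmd nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 in this run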
00:13:25.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.006 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:25.006 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:25.006 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.006 --rc genhtml_branch_coverage=1 00:13:25.006 --rc genhtml_function_coverage=1 00:13:25.006 --rc genhtml_legend=1 00:13:25.006 --rc geninfo_all_blocks=1 00:13:25.006 --rc geninfo_unexecuted_blocks=1 00:13:25.006 00:13:25.006 ' 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.006 --rc genhtml_branch_coverage=1 00:13:25.006 --rc genhtml_function_coverage=1 00:13:25.006 --rc genhtml_legend=1 00:13:25.006 --rc geninfo_all_blocks=1 00:13:25.006 --rc geninfo_unexecuted_blocks=1 00:13:25.006 00:13:25.006 ' 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.006 --rc genhtml_branch_coverage=1 00:13:25.006 --rc genhtml_function_coverage=1 00:13:25.006 --rc genhtml_legend=1 00:13:25.006 --rc geninfo_all_blocks=1 00:13:25.006 --rc geninfo_unexecuted_blocks=1 00:13:25.006 00:13:25.006 ' 00:13:25.006 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.006 --rc genhtml_branch_coverage=1 00:13:25.006 --rc genhtml_function_coverage=1 00:13:25.006 --rc genhtml_legend=1 00:13:25.006 --rc geninfo_all_blocks=1 00:13:25.006 --rc geninfo_unexecuted_blocks=1 00:13:25.006 00:13:25.006 ' 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:25.007 11:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:25.007 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:33.158 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:33.159 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:33.159 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:33.159 Found net devices under 0000:31:00.0: cvl_0_0 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:33.159 Found net devices under 0000:31:00.1: cvl_0_1 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:33.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:33.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms
00:13:33.159
00:13:33.159 --- 10.0.0.2 ping statistics ---
00:13:33.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.159 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:33.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:33.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms
00:13:33.159
00:13:33.159 --- 10.0.0.1 ping statistics ---
00:13:33.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.159 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3447983
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3447983
00:13:33.159 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:33.160 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3447983 ']'
00:13:33.160 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:33.160 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:33.160 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:33.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:33.160 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:33.160 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:33.160 [2024-12-09 11:28:24.788000] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
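
The "Starting SPDK v25.01-pre ..." notice above and the DPDK EAL parameter dump that follows are nvmf_tgt itself booting: nvmfappstart backgrounds the binary inside the cvl_0_0_ns_spdk namespace, records nvmfpid=3447983, and waitforlisten polls /var/tmp/spdk.sock (up to max_retries=100) until the app answers RPCs. The stray "[: : integer expression expected" earlier in the trace is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' against an empty variable, which bash rejects but the test tolerates. A condensed sketch of this startup step (the commands are taken from this log; the polling loop is an assumed stand-in for waitforlisten, and $flag is a hypothetical name for the unset variable):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=(ip netns exec cvl_0_0_ns_spdk)

    # launch the target in the test namespace, all trace groups enabled
    "${NS[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the RPC server is up (roughly what waitforlisten does)
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null; do
        sleep 0.1
    done

    # the common.sh line 33 noise: '[' '' -eq 1 ']' fails on an empty
    # value; defaulting the expansion is the usual guard
    [ "${flag:-0}" -eq 1 ] && echo "flag set"
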
00:13:33.160 [2024-12-09 11:28:24.788078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:33.160 [2024-12-09 11:28:24.873317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:33.160 [2024-12-09 11:28:24.915242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:33.160 [2024-12-09 11:28:24.915278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:33.160 [2024-12-09 11:28:24.915286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:33.160 [2024-12-09 11:28:24.915293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:33.160 [2024-12-09 11:28:24.915299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:33.160 [2024-12-09 11:28:24.917084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:33.160 [2024-12-09 11:28:24.917251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:33.160 [2024-12-09 11:28:24.917391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:33.160 [2024-12-09 11:28:24.917391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30690
00:13:33.731 [2024-12-09 11:28:25.781874] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:33.731 {
00:13:33.731 "nqn": "nqn.2016-06.io.spdk:cnode30690",
00:13:33.731 "tgt_name": "foobar",
00:13:33.731 "method": "nvmf_create_subsystem",
00:13:33.731 "req_id": 1
00:13:33.731 }
00:13:33.731 Got JSON-RPC error response
00:13:33.731 response:
00:13:33.731 {
00:13:33.731 "code": -32603,
00:13:33.731 "message": "Unable to find target foobar"
00:13:33.731 }'
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:33.731 {
00:13:33.731 "nqn": "nqn.2016-06.io.spdk:cnode30690",
00:13:33.731 "tgt_name": "foobar",
00:13:33.731 "method": "nvmf_create_subsystem",
00:13:33.731 "req_id": 1
00:13:33.731 }
00:13:33.731 Got JSON-RPC error response
00:13:33.731 response:
00:13:33.731 {
00:13:33.731 "code": -32603,
00:13:33.731 "message": "Unable to find target foobar"
00:13:33.731 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:33.731 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1822
00:13:33.991 [2024-12-09 11:28:25.970539] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1822: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:33.991 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:33.991 {
00:13:33.991 "nqn": "nqn.2016-06.io.spdk:cnode1822",
00:13:33.991 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:33.991 "method": "nvmf_create_subsystem",
00:13:33.991 "req_id": 1
00:13:33.991 }
00:13:33.991 Got JSON-RPC error response
00:13:33.991 response:
00:13:33.991 {
00:13:33.991 "code": -32602,
00:13:33.991 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:33.991 }'
00:13:33.991 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:33.991 {
00:13:33.991 "nqn": "nqn.2016-06.io.spdk:cnode1822",
00:13:33.991 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:33.991 "method": "nvmf_create_subsystem",
00:13:33.991 "req_id": 1
00:13:33.991 }
00:13:33.991 Got JSON-RPC error response
00:13:33.991 response:
00:13:33.991 {
00:13:33.991 "code": -32602,
00:13:33.991 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:33.991 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:33.991 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:33.991 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18650
00:13:34.252 [2024-12-09 11:28:26.155064] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18650: invalid model number 'SPDK_Controller'
00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:34.252 {
00:13:34.252 "nqn": "nqn.2016-06.io.spdk:cnode18650",
00:13:34.252 "model_number": "SPDK_Controller\u001f",
00:13:34.252 "method": "nvmf_create_subsystem",
00:13:34.252 "req_id": 1
00:13:34.252 }
00:13:34.252 Got JSON-RPC error response
00:13:34.252 response:
00:13:34.252 {
00:13:34.252 "code": -32602,
00:13:34.252 "message": "Invalid MN SPDK_Controller\u001f"
00:13:34.252 }'
00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:34.252 {
00:13:34.252 "nqn": "nqn.2016-06.io.spdk:cnode18650",
00:13:34.252 "model_number": "SPDK_Controller\u001f",
00:13:34.252 "method": "nvmf_create_subsystem",
00:13:34.252 "req_id": 1
00:13:34.252 }
00:13:34.252 Got JSON-RPC error response
00:13:34.252 response:
00:13:34.252 {
00:13:34.252 "code": -32602,
00:13:34.252 "message": "Invalid MN SPDK_Controller\u001f"
00:13:34.252 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:34.252 11:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.252 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
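
The run of printf %x / echo -e / string+= triplets on either side of this point is gen_random_s 21 at work: RANDOM (pinned to 0 at target/invalid.sh line 16, so every run draws the same sequence) picks an entry from the chars array of codes 32-127, printf renders the code as hex, and echo -e turns it back into the glyph that gets appended. On this run the loop ends by echoing H+vh)o='x<-?Nw[e;GmZw, which is then fed to nvmf_create_subsystem -s as a deliberately awkward 21-character serial number. A condensed sketch of the same pattern (assumed equivalent, not the repo's exact helper, which also quotes ' and checks for a leading - as the [[ H == \- ]] test further down shows):

    # reproducible pseudo-random string of $1 printable characters,
    # mirroring the chars / printf %x / echo -e loop traced here
    gen_random_s() {
        local length=$1 ll hex string=
        for (( ll = 0; ll < length; ll++ )); do
            printf -v hex '%x' $(( 32 + RANDOM % 96 ))  # codes 32..127
            string+=$(echo -e "\x$hex")                 # code point -> glyph
        done
        echo "$string"
    }

    RANDOM=0            # seed as the test does, so reruns are identical
    gen_random_s 21     # 21 characters for a serial, 41 for a model number
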
00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6d' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H+vh)o='\''x<-?Nw[e;GmZw' 00:13:34.253 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'H+vh)o='\''x<-?Nw[e;GmZw' nqn.2016-06.io.spdk:cnode28817 00:13:34.515 [2024-12-09 11:28:26.508196] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28817: invalid serial number 'H+vh)o='x<-?Nw[e;GmZw' 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:34.515 { 00:13:34.515 "nqn": "nqn.2016-06.io.spdk:cnode28817", 00:13:34.515 "serial_number": "H+vh)o='\''x<-?Nw[e;GmZw", 00:13:34.515 "method": "nvmf_create_subsystem", 00:13:34.515 "req_id": 1 00:13:34.515 } 00:13:34.515 Got JSON-RPC error response 00:13:34.515 response: 00:13:34.515 { 00:13:34.515 "code": -32602, 00:13:34.515 "message": "Invalid SN H+vh)o='\''x<-?Nw[e;GmZw" 00:13:34.515 }' 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:34.515 { 00:13:34.515 "nqn": "nqn.2016-06.io.spdk:cnode28817", 00:13:34.515 "serial_number": "H+vh)o='x<-?Nw[e;GmZw", 00:13:34.515 "method": "nvmf_create_subsystem", 00:13:34.515 "req_id": 1 00:13:34.515 } 00:13:34.515 Got JSON-RPC error response 00:13:34.515 response: 00:13:34.515 { 00:13:34.515 "code": -32602, 00:13:34.515 "message": "Invalid SN H+vh)o='x<-?Nw[e;GmZw" 00:13:34.515 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' 
'71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.515 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x3f' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 35 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:34.516 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#'
[... the @24/@25 loop repeats identically for each remaining character, printf'ing a random code point and appending its echo -e rendering to $string: 1 z m V L G 3 M x * m l I 9 1 g & C M b 2 i ! m > t ...]
00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x48' 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '~hvcQ?4o35v*#1zmVLG3Mx*mlI91g&CMb2i!m>tH7' 00:13:34.779 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '~hvcQ?4o35v*#1zmVLG3Mx*mlI91g&CMb2i!m>tH7' nqn.2016-06.io.spdk:cnode27030 00:13:35.040 [2024-12-09 11:28:27.013834] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27030: invalid model number '~hvcQ?4o35v*#1zmVLG3Mx*mlI91g&CMb2i!m>tH7' 00:13:35.040 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:35.040 { 00:13:35.040 "nqn": "nqn.2016-06.io.spdk:cnode27030", 00:13:35.040 "model_number": "~hvcQ?4o35v*#1zmVLG3Mx*mlI91g&CMb2i!m>tH7", 00:13:35.040 "method": "nvmf_create_subsystem", 00:13:35.040 "req_id": 1 00:13:35.040 } 00:13:35.040 Got JSON-RPC error response 00:13:35.040 response: 00:13:35.040 { 00:13:35.040 "code": -32602, 00:13:35.040 "message": "Invalid MN ~hvcQ?4o35v*#1zmVLG3Mx*mlI91g&CMb2i!m>tH7" 00:13:35.040 }' 00:13:35.040 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:35.040 { 00:13:35.040 "nqn": "nqn.2016-06.io.spdk:cnode27030", 00:13:35.040 "model_number": "~hvcQ?4o35v*#1zmVLG3Mx*mlI91g&CMb2i!m>tH7", 00:13:35.040 "method": "nvmf_create_subsystem", 00:13:35.040 "req_id": 1 00:13:35.040 } 00:13:35.040 Got JSON-RPC error response 00:13:35.040 response: 00:13:35.040 { 00:13:35.040 "code": -32602, 00:13:35.040 "message": "Invalid MN ~hvcQ?4o35v*#1zmVLG3Mx*mlI91g&CMb2i!m>tH7" 00:13:35.040 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:35.040 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:35.040 [2024-12-09 11:28:27.194519] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.301 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:35.301 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:35.302 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:35.302 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:35.302 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:13:35.302 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:35.563 [2024-12-09 11:28:27.560659] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:35.563 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:35.563 { 00:13:35.563 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:35.563 "listen_address": { 00:13:35.563 "trtype": "tcp", 00:13:35.563 "traddr": "", 00:13:35.563 "trsvcid": "4421" 00:13:35.563 }, 00:13:35.563 "method": "nvmf_subsystem_remove_listener", 00:13:35.563 "req_id": 1 00:13:35.563 } 00:13:35.563 Got JSON-RPC error response 00:13:35.563 response: 00:13:35.563 { 00:13:35.563 "code": -32602, 00:13:35.563 "message": "Invalid parameters" 00:13:35.563 }' 00:13:35.563 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:35.563 { 00:13:35.563 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:35.563 "listen_address": { 00:13:35.563 "trtype": "tcp", 00:13:35.563 "traddr": "", 00:13:35.563 "trsvcid": "4421" 00:13:35.563 }, 00:13:35.563 "method": "nvmf_subsystem_remove_listener", 00:13:35.563 "req_id": 1 00:13:35.563 } 00:13:35.563 Got JSON-RPC error response 00:13:35.563 response: 00:13:35.563 { 00:13:35.563 "code": -32602, 00:13:35.563 "message": "Invalid parameters" 00:13:35.563 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:35.563 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26124 -i 0 00:13:35.825 [2024-12-09 11:28:27.737187] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26124: invalid cntlid range [0-65519] 00:13:35.825 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:35.825 { 00:13:35.825 "nqn": "nqn.2016-06.io.spdk:cnode26124", 00:13:35.825 "min_cntlid": 0, 00:13:35.825 "method": "nvmf_create_subsystem", 00:13:35.825 "req_id": 1 00:13:35.825 } 00:13:35.825 Got JSON-RPC error response 00:13:35.825 response: 00:13:35.825 { 00:13:35.825 "code": -32602, 00:13:35.825 "message": "Invalid cntlid range [0-65519]" 00:13:35.825 }' 00:13:35.825 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:35.825 { 00:13:35.825 "nqn": "nqn.2016-06.io.spdk:cnode26124", 00:13:35.825 "min_cntlid": 0, 00:13:35.825 "method": "nvmf_create_subsystem", 00:13:35.825 "req_id": 1 00:13:35.825 } 00:13:35.825 Got JSON-RPC error response 00:13:35.825 response: 00:13:35.825 { 00:13:35.825 "code": -32602, 00:13:35.825 "message": "Invalid cntlid range [0-65519]" 00:13:35.825 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.825 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15924 -i 65520 00:13:35.825 [2024-12-09 11:28:27.913744] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15924: invalid cntlid range [65520-65519] 00:13:35.825 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:35.825 { 00:13:35.825 "nqn": "nqn.2016-06.io.spdk:cnode15924", 00:13:35.825 "min_cntlid": 
65520, 00:13:35.825 "method": "nvmf_create_subsystem", 00:13:35.825 "req_id": 1 00:13:35.825 } 00:13:35.825 Got JSON-RPC error response 00:13:35.825 response: 00:13:35.825 { 00:13:35.825 "code": -32602, 00:13:35.825 "message": "Invalid cntlid range [65520-65519]" 00:13:35.825 }' 00:13:35.825 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:35.825 { 00:13:35.825 "nqn": "nqn.2016-06.io.spdk:cnode15924", 00:13:35.825 "min_cntlid": 65520, 00:13:35.825 "method": "nvmf_create_subsystem", 00:13:35.825 "req_id": 1 00:13:35.825 } 00:13:35.825 Got JSON-RPC error response 00:13:35.825 response: 00:13:35.825 { 00:13:35.825 "code": -32602, 00:13:35.825 "message": "Invalid cntlid range [65520-65519]" 00:13:35.825 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.825 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30508 -I 0 00:13:36.087 [2024-12-09 11:28:28.090316] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30508: invalid cntlid range [1-0] 00:13:36.087 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:36.087 { 00:13:36.087 "nqn": "nqn.2016-06.io.spdk:cnode30508", 00:13:36.087 "max_cntlid": 0, 00:13:36.087 "method": "nvmf_create_subsystem", 00:13:36.087 "req_id": 1 00:13:36.087 } 00:13:36.087 Got JSON-RPC error response 00:13:36.087 response: 00:13:36.087 { 00:13:36.087 "code": -32602, 00:13:36.087 "message": "Invalid cntlid range [1-0]" 00:13:36.087 }' 00:13:36.087 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:36.087 { 00:13:36.087 "nqn": "nqn.2016-06.io.spdk:cnode30508", 00:13:36.087 "max_cntlid": 0, 00:13:36.087 "method": "nvmf_create_subsystem", 00:13:36.087 "req_id": 1 00:13:36.087 } 00:13:36.087 Got JSON-RPC error response 00:13:36.087 response: 00:13:36.087 { 00:13:36.087 "code": -32602, 00:13:36.087 "message": "Invalid cntlid range [1-0]" 00:13:36.087 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.087 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29883 -I 65520 00:13:36.348 [2024-12-09 11:28:28.270873] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29883: invalid cntlid range [1-65520] 00:13:36.348 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:36.348 { 00:13:36.348 "nqn": "nqn.2016-06.io.spdk:cnode29883", 00:13:36.348 "max_cntlid": 65520, 00:13:36.348 "method": "nvmf_create_subsystem", 00:13:36.348 "req_id": 1 00:13:36.348 } 00:13:36.348 Got JSON-RPC error response 00:13:36.348 response: 00:13:36.348 { 00:13:36.348 "code": -32602, 00:13:36.348 "message": "Invalid cntlid range [1-65520]" 00:13:36.348 }' 00:13:36.348 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:36.348 { 00:13:36.348 "nqn": "nqn.2016-06.io.spdk:cnode29883", 00:13:36.348 "max_cntlid": 65520, 00:13:36.348 "method": "nvmf_create_subsystem", 00:13:36.348 "req_id": 1 00:13:36.348 } 00:13:36.348 Got JSON-RPC error response 00:13:36.348 response: 00:13:36.348 { 00:13:36.348 "code": -32602, 00:13:36.348 "message": "Invalid cntlid range [1-65520]" 00:13:36.348 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
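A note on the pattern these checks repeat, above and below: each negative test captures the JSON-RPC error printed by rpc.py and glob-matches it against the expected message (the backslash-heavy patterns such as *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* are just xtrace's per-character quoting of *Invalid cntlid range*). Taken together, the rejections of [0-65519], [65520-65519], [1-0], and [1-65520] pin the accepted controller ID window to [1, 65519], i.e. 1 through 0xFFEF; the values above that are reserved. A minimal standalone sketch of one such check, assuming a running nvmf target, with the cnode name chosen purely for illustration:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # min_cntlid=0 falls below the accepted [1, 0xFFEF] window and must be rejected
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo 'rejected as expected'
    # max_cntlid=65520 falls above the window and must be rejected as well
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -I 65520 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo 'rejected as expected'
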
00:13:36.349 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25719 -i 6 -I 5 00:13:36.349 [2024-12-09 11:28:28.459451] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25719: invalid cntlid range [6-5] 00:13:36.349 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:36.349 { 00:13:36.349 "nqn": "nqn.2016-06.io.spdk:cnode25719", 00:13:36.349 "min_cntlid": 6, 00:13:36.349 "max_cntlid": 5, 00:13:36.349 "method": "nvmf_create_subsystem", 00:13:36.349 "req_id": 1 00:13:36.349 } 00:13:36.349 Got JSON-RPC error response 00:13:36.349 response: 00:13:36.349 { 00:13:36.349 "code": -32602, 00:13:36.349 "message": "Invalid cntlid range [6-5]" 00:13:36.349 }' 00:13:36.349 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:36.349 { 00:13:36.349 "nqn": "nqn.2016-06.io.spdk:cnode25719", 00:13:36.349 "min_cntlid": 6, 00:13:36.349 "max_cntlid": 5, 00:13:36.349 "method": "nvmf_create_subsystem", 00:13:36.349 "req_id": 1 00:13:36.349 } 00:13:36.349 Got JSON-RPC error response 00:13:36.349 response: 00:13:36.349 { 00:13:36.349 "code": -32602, 00:13:36.349 "message": "Invalid cntlid range [6-5]" 00:13:36.349 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.349 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:36.611 { 00:13:36.611 "name": "foobar", 00:13:36.611 "method": "nvmf_delete_target", 00:13:36.611 "req_id": 1 00:13:36.611 } 00:13:36.611 Got JSON-RPC error response 00:13:36.611 response: 00:13:36.611 { 00:13:36.611 "code": -32602, 00:13:36.611 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:36.611 }' 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:36.611 { 00:13:36.611 "name": "foobar", 00:13:36.611 "method": "nvmf_delete_target", 00:13:36.611 "req_id": 1 00:13:36.611 } 00:13:36.611 Got JSON-RPC error response 00:13:36.611 response: 00:13:36.611 { 00:13:36.611 "code": -32602, 00:13:36.611 "message": "The specified target doesn't exist, cannot delete it." 
00:13:36.611 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:36.611 rmmod nvme_tcp 00:13:36.611 rmmod nvme_fabrics 00:13:36.611 rmmod nvme_keyring 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3447983 ']' 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3447983 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3447983 ']' 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3447983 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447983 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447983' 00:13:36.611 killing process with pid 3447983 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3447983 00:13:36.611 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3447983 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.872 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.798 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:38.798 00:13:38.798 real 0m14.039s 00:13:38.798 user 0m20.348s 00:13:38.798 sys 0m6.574s 00:13:38.798 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.798 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:38.798 ************************************ 00:13:38.798 END TEST nvmf_invalid 00:13:38.798 ************************************ 00:13:39.059 11:28:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.059 11:28:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.059 11:28:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.059 11:28:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.059 ************************************ 00:13:39.059 START TEST nvmf_connect_stress 00:13:39.059 ************************************ 00:13:39.059 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.059 * Looking for test storage... 
00:13:39.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.059 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:39.059 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.060 --rc genhtml_branch_coverage=1 00:13:39.060 --rc genhtml_function_coverage=1 00:13:39.060 --rc genhtml_legend=1 00:13:39.060 --rc geninfo_all_blocks=1 00:13:39.060 --rc geninfo_unexecuted_blocks=1 00:13:39.060 00:13:39.060 ' 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.060 --rc genhtml_branch_coverage=1 00:13:39.060 --rc genhtml_function_coverage=1 00:13:39.060 --rc genhtml_legend=1 00:13:39.060 --rc geninfo_all_blocks=1 00:13:39.060 --rc geninfo_unexecuted_blocks=1 00:13:39.060 00:13:39.060 ' 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.060 --rc genhtml_branch_coverage=1 00:13:39.060 --rc genhtml_function_coverage=1 00:13:39.060 --rc genhtml_legend=1 00:13:39.060 --rc geninfo_all_blocks=1 00:13:39.060 --rc geninfo_unexecuted_blocks=1 00:13:39.060 00:13:39.060 ' 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.060 --rc genhtml_branch_coverage=1 00:13:39.060 --rc genhtml_function_coverage=1 00:13:39.060 --rc genhtml_legend=1 00:13:39.060 --rc geninfo_all_blocks=1 00:13:39.060 --rc geninfo_unexecuted_blocks=1 00:13:39.060 00:13:39.060 ' 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.060 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.322 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:39.322 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:39.322 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.322 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.322 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.322 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.322 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:39.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.323 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.474 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:47.474 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:47.474 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:47.474 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:47.474 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:47.475 11:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:47.475 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:47.475 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:47.475 Found net devices under 0000:31:00.0: cvl_0_0 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:47.475 Found net devices under 0000:31:00.1: cvl_0_1 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:47.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:13:47.475 00:13:47.475 --- 10.0.0.2 ping statistics --- 00:13:47.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.475 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:13:47.475 00:13:47.475 --- 10.0.0.1 ping statistics --- 00:13:47.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.475 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:47.475 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3453214 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3453214 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3453214 ']' 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:47.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.476 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.476 [2024-12-09 11:28:38.815857] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:13:47.476 [2024-12-09 11:28:38.815939] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.476 [2024-12-09 11:28:38.905818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.476 [2024-12-09 11:28:38.957399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.476 [2024-12-09 11:28:38.957457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.476 [2024-12-09 11:28:38.957466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.476 [2024-12-09 11:28:38.957473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.476 [2024-12-09 11:28:38.957480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.476 [2024-12-09 11:28:38.959381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.476 [2024-12-09 11:28:38.959545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.476 [2024-12-09 11:28:38.959546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.476 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.476 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:47.476 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.476 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.476 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.738 [2024-12-09 11:28:39.664492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
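Setup recap: the target side of the stress test is assembled with four rpc_cmd calls, visible immediately above and below, before any initiator traffic starts. Stripped of the rpc_cmd and xtrace plumbing, the sequence amounts to the following sketch; it assumes nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace configured earlier in this log, and the flags are exactly the ones the harness passes:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # subsystem allows any host (-a) and is capped via -m 10 (max namespaces)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # a 1000 MB null bdev with a 512-byte block size serves as backing storage
    $rpc bdev_null_create NULL1 1000 512
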
00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.738 [2024-12-09 11:28:39.688876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.738 NULL1 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3453515 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:47.738 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:47.739 11:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.739 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.001 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.001 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:48.001 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.001 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.001 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.574 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.574 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:48.574 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.574 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.574 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.836 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.836 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:48.836 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.836 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.836 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.097 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.097 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:49.097 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.097 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.097 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.359 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.359 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:49.359 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.359 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.359 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.620 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.620 11:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:49.620 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.620 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.620 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:50.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.192 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.453 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.453 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:50.453 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.453 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.453 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.714 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.714 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:50.714 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.714 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.714 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.975 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.975 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:50.975 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.975 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.975 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.237 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.237 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:51.237 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.237 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.237 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.808 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.808 11:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:51.808 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.808 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.808 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.069 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.069 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:52.069 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.069 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.069 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.329 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.329 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:52.329 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.329 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.329 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.590 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.590 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:52.590 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.590 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.590 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.850 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.112 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:53.112 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.112 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.112 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.372 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.372 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:53.372 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.372 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.372 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.632 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.632 11:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:53.632 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.632 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.632 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.892 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.892 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:53.892 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.892 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.892 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.153 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.153 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:54.153 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.153 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.153 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.724 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.724 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:54.724 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.724 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.724 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:54.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.245 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.245 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:55.245 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.245 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.245 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.505 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.505 11:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:55.505 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.505 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.505 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.076 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.076 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:56.076 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.076 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.076 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.337 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.337 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:56.337 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.337 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.337 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.597 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.597 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:56.597 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.597 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.597 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.858 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.858 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:56.858 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.858 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.858 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.118 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.118 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:57.118 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.118 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.118 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.690 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.690 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:57.690 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.690 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.690 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.690 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3453515 00:13:57.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3453515) - No such process 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3453515 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.951 rmmod nvme_tcp 00:13:57.951 rmmod nvme_fabrics 00:13:57.951 rmmod nvme_keyring 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3453214 ']' 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3453214 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3453214 ']' 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3453214 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.951 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3453214 00:13:57.951 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
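The long repetitive stretch above is the stress loop itself: connect_stress (PID 3453515) was launched with -t 10, so it runs for roughly ten seconds while the script probes it with signal 0 and replays a batch of queued RPCs on every pass; the visible "kill: (3453515) - No such process" line is the final failed probe, after which the PID is reaped. A minimal sketch of the pattern (rpc.txt holds the twenty lines assembled by the earlier seq 1 20 / cat loop; its contents are not shown in this trace):

    ./connect_stress -c 0x1 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    PERF_PID=$!

    while kill -0 "$PERF_PID"; do    # signal 0: existence check only, nothing is delivered
        rpc_cmd < "$rpcs"            # keep the target busy with batched RPCs meanwhile
    done
    wait "$PERF_PID"                 # reap the stress process once it has exited
    rm -f "$rpcs"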
00:13:57.951 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:57.951 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3453214' 00:13:57.951 killing process with pid 3453214 00:13:57.951 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3453214 00:13:57.951 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3453214 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.212 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.124 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:00.124 00:14:00.124 real 0m21.221s 00:14:00.124 user 0m42.181s 00:14:00.124 sys 0m9.195s 00:14:00.124 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.124 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 ************************************ 00:14:00.124 END TEST nvmf_connect_stress 00:14:00.124 ************************************ 00:14:00.124 11:28:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:00.124 11:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.124 11:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.124 11:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.386 ************************************ 00:14:00.386 START TEST nvmf_fused_ordering 00:14:00.386 ************************************ 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:00.386 * Looking for test storage... 
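Each test script here runs under the harness's run_test wrapper, which is what prints the starred START TEST / END TEST banners and the real/user/sys timing summary above, and which performs the '[' 3 -le 1 ']' argument-count check visible at the start of nvmf_fused_ordering. A rough reconstruction of its shape, not the verbatim autotest_common.sh source:

    run_test() {
        [ "$#" -le 1 ] && return 1          # the trace's guard: needs a name plus a command
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                           # e.g. fused_ordering.sh --transport=tcp;
                                            # `time` emits real/user/sys before the END banner
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }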
00:14:00.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.386 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:00.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.387 --rc genhtml_branch_coverage=1 00:14:00.387 --rc genhtml_function_coverage=1 00:14:00.387 --rc genhtml_legend=1 00:14:00.387 --rc geninfo_all_blocks=1 00:14:00.387 --rc geninfo_unexecuted_blocks=1 00:14:00.387 00:14:00.387 ' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:00.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.387 --rc genhtml_branch_coverage=1 00:14:00.387 --rc genhtml_function_coverage=1 00:14:00.387 --rc genhtml_legend=1 00:14:00.387 --rc geninfo_all_blocks=1 00:14:00.387 --rc geninfo_unexecuted_blocks=1 00:14:00.387 00:14:00.387 ' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:00.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.387 --rc genhtml_branch_coverage=1 00:14:00.387 --rc genhtml_function_coverage=1 00:14:00.387 --rc genhtml_legend=1 00:14:00.387 --rc geninfo_all_blocks=1 00:14:00.387 --rc geninfo_unexecuted_blocks=1 00:14:00.387 00:14:00.387 ' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:00.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.387 --rc genhtml_branch_coverage=1 00:14:00.387 --rc genhtml_function_coverage=1 00:14:00.387 --rc genhtml_legend=1 00:14:00.387 --rc geninfo_all_blocks=1 00:14:00.387 --rc geninfo_unexecuted_blocks=1 00:14:00.387 00:14:00.387 ' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:00.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:00.387 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:08.530 11:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.530 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:08.531 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:08.531 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:08.531 Found net devices under 0000:31:00.0: cvl_0_0 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:08.531 Found net devices under 0000:31:00.1: cvl_0_1 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:08.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:14:08.531 00:14:08.531 --- 10.0.0.2 ping statistics --- 00:14:08.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.531 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:14:08.531 00:14:08.531 --- 10.0.0.1 ping statistics --- 00:14:08.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.531 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3459622 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3459622 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3459622 ']' 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
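At this point nvmf_tcp_init has split the two ports into a point-to-point rig: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); the NVMe/TCP port is opened with a rule tagged SPDK_NVMF so teardown can strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of this test; and both directions are ping-verified. The same sequence, condensed from the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port, tagged so cleanup can find the rule again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
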
00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.531 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:08.531 [2024-12-09 11:28:59.615268] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:14:08.531 [2024-12-09 11:28:59.615336] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.531 [2024-12-09 11:28:59.714927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.532 [2024-12-09 11:28:59.765458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.532 [2024-12-09 11:28:59.765511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.532 [2024-12-09 11:28:59.765520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.532 [2024-12-09 11:28:59.765527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.532 [2024-12-09 11:28:59.765533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.532 [2024-12-09 11:28:59.766377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 [2024-12-09 11:29:00.481299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
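nvmfappstart has launched nvmf_tgt inside the namespace (pid 3459622) and waitforlisten polls, with max_retries=100 per the trace, until the RPC server answers on /var/tmp/spdk.sock. A minimal illustrative poll loop, not the actual waitforlisten implementation:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      # rpc_get_methods succeeds once the app is up and serving RPCs
      "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
      sleep 0.1    # assumed back-off; the real helper's delay is not shown in the trace
  done
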
00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 [2024-12-09 11:29:00.497535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 NULL1 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:08.532 [2024-12-09 11:29:00.556177] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
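With the target up, fused_ordering.sh builds the fabric entirely over RPC: a TCP transport (with the -o -u 8192 options exactly as traced), subsystem cnode1 capped at 10 namespaces with any host allowed, a listener on 10.0.0.2:4420, and a 1000 MiB / 512 B-block null bdev exposed as namespace 1. The rpc_cmd calls above are equivalent to these direct rpc.py invocations (default socket /var/tmp/spdk.sock):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # name, size in MiB, block size in bytes
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects with the transport ID string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the fused_ordering(0..1023) lines that follow appear to be its per-command progress counter.
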
00:14:08.532 [2024-12-09 11:29:00.556220] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459953 ] 00:14:09.104 Attached to nqn.2016-06.io.spdk:cnode1 00:14:09.104 Namespace ID: 1 size: 1GB 00:14:09.104 fused_ordering(0) 00:14:09.104 fused_ordering(1) 00:14:09.104 fused_ordering(2) 00:14:09.104 fused_ordering(3) 00:14:09.104 fused_ordering(4) 00:14:09.104 fused_ordering(5) 00:14:09.104 fused_ordering(6) 00:14:09.104 fused_ordering(7) 00:14:09.104 fused_ordering(8) 00:14:09.104 fused_ordering(9) 00:14:09.104 fused_ordering(10) 00:14:09.104 fused_ordering(11) 00:14:09.104 fused_ordering(12) 00:14:09.104 fused_ordering(13) 00:14:09.104 fused_ordering(14) 00:14:09.104 fused_ordering(15) 00:14:09.104 fused_ordering(16) 00:14:09.104 fused_ordering(17) 00:14:09.104 fused_ordering(18) 00:14:09.104 fused_ordering(19) 00:14:09.104 fused_ordering(20) 00:14:09.104 fused_ordering(21) 00:14:09.104 fused_ordering(22) 00:14:09.104 fused_ordering(23) 00:14:09.104 fused_ordering(24) 00:14:09.104 fused_ordering(25) 00:14:09.104 fused_ordering(26) 00:14:09.104 fused_ordering(27) 00:14:09.104 fused_ordering(28) 00:14:09.104 fused_ordering(29) 00:14:09.104 fused_ordering(30) 00:14:09.104 fused_ordering(31) 00:14:09.104 fused_ordering(32) 00:14:09.104 fused_ordering(33) 00:14:09.104 fused_ordering(34) 00:14:09.104 fused_ordering(35) 00:14:09.104 fused_ordering(36) 00:14:09.104 fused_ordering(37) 00:14:09.104 fused_ordering(38) 00:14:09.104 fused_ordering(39) 00:14:09.104 fused_ordering(40) 00:14:09.104 fused_ordering(41) 00:14:09.104 fused_ordering(42) 00:14:09.104 fused_ordering(43) 00:14:09.104 fused_ordering(44) 00:14:09.104 fused_ordering(45) 00:14:09.104 fused_ordering(46) 00:14:09.104 fused_ordering(47) 00:14:09.104 fused_ordering(48) 00:14:09.104 fused_ordering(49) 00:14:09.104 fused_ordering(50) 00:14:09.104 fused_ordering(51) 00:14:09.104 fused_ordering(52) 00:14:09.104 fused_ordering(53) 00:14:09.104 fused_ordering(54) 00:14:09.104 fused_ordering(55) 00:14:09.104 fused_ordering(56) 00:14:09.104 fused_ordering(57) 00:14:09.104 fused_ordering(58) 00:14:09.104 fused_ordering(59) 00:14:09.104 fused_ordering(60) 00:14:09.104 fused_ordering(61) 00:14:09.104 fused_ordering(62) 00:14:09.104 fused_ordering(63) 00:14:09.104 fused_ordering(64) 00:14:09.104 fused_ordering(65) 00:14:09.104 fused_ordering(66) 00:14:09.104 fused_ordering(67) 00:14:09.104 fused_ordering(68) 00:14:09.104 fused_ordering(69) 00:14:09.104 fused_ordering(70) 00:14:09.104 fused_ordering(71) 00:14:09.104 fused_ordering(72) 00:14:09.104 fused_ordering(73) 00:14:09.104 fused_ordering(74) 00:14:09.104 fused_ordering(75) 00:14:09.104 fused_ordering(76) 00:14:09.104 fused_ordering(77) 00:14:09.104 fused_ordering(78) 00:14:09.104 fused_ordering(79) 00:14:09.104 fused_ordering(80) 00:14:09.104 fused_ordering(81) 00:14:09.104 fused_ordering(82) 00:14:09.104 fused_ordering(83) 00:14:09.104 fused_ordering(84) 00:14:09.104 fused_ordering(85) 00:14:09.104 fused_ordering(86) 00:14:09.104 fused_ordering(87) 00:14:09.104 fused_ordering(88) 00:14:09.104 fused_ordering(89) 00:14:09.104 fused_ordering(90) 00:14:09.104 fused_ordering(91) 00:14:09.104 fused_ordering(92) 00:14:09.104 fused_ordering(93) 00:14:09.104 fused_ordering(94) 00:14:09.104 fused_ordering(95) 00:14:09.104 fused_ordering(96) 00:14:09.104 fused_ordering(97) 00:14:09.104 fused_ordering(98) 
00:14:09.104 fused_ordering(99) 00:14:09.104 fused_ordering(100) 00:14:09.104 fused_ordering(101) 00:14:09.104 fused_ordering(102) 00:14:09.104 fused_ordering(103) 00:14:09.104 fused_ordering(104) 00:14:09.104 fused_ordering(105) 00:14:09.104 fused_ordering(106) 00:14:09.104 fused_ordering(107) 00:14:09.104 fused_ordering(108) 00:14:09.104 fused_ordering(109) 00:14:09.104 fused_ordering(110) 00:14:09.104 fused_ordering(111) 00:14:09.104 fused_ordering(112) 00:14:09.104 fused_ordering(113) 00:14:09.104 fused_ordering(114) 00:14:09.104 fused_ordering(115) 00:14:09.104 fused_ordering(116) 00:14:09.104 fused_ordering(117) 00:14:09.104 fused_ordering(118) 00:14:09.104 fused_ordering(119) 00:14:09.104 fused_ordering(120) 00:14:09.104 fused_ordering(121) 00:14:09.104 fused_ordering(122) 00:14:09.104 fused_ordering(123) 00:14:09.104 fused_ordering(124) 00:14:09.104 fused_ordering(125) 00:14:09.104 fused_ordering(126) 00:14:09.104 fused_ordering(127) 00:14:09.104 fused_ordering(128) 00:14:09.104 fused_ordering(129) 00:14:09.104 fused_ordering(130) 00:14:09.104 fused_ordering(131) 00:14:09.104 fused_ordering(132) 00:14:09.104 fused_ordering(133) 00:14:09.104 fused_ordering(134) 00:14:09.104 fused_ordering(135) 00:14:09.104 fused_ordering(136) 00:14:09.104 fused_ordering(137) 00:14:09.104 fused_ordering(138) 00:14:09.104 fused_ordering(139) 00:14:09.104 fused_ordering(140) 00:14:09.104 fused_ordering(141) 00:14:09.104 fused_ordering(142) 00:14:09.104 fused_ordering(143) 00:14:09.104 fused_ordering(144) 00:14:09.104 fused_ordering(145) 00:14:09.104 fused_ordering(146) 00:14:09.104 fused_ordering(147) 00:14:09.104 fused_ordering(148) 00:14:09.104 fused_ordering(149) 00:14:09.104 fused_ordering(150) 00:14:09.104 fused_ordering(151) 00:14:09.104 fused_ordering(152) 00:14:09.104 fused_ordering(153) 00:14:09.104 fused_ordering(154) 00:14:09.104 fused_ordering(155) 00:14:09.104 fused_ordering(156) 00:14:09.104 fused_ordering(157) 00:14:09.104 fused_ordering(158) 00:14:09.104 fused_ordering(159) 00:14:09.104 fused_ordering(160) 00:14:09.104 fused_ordering(161) 00:14:09.104 fused_ordering(162) 00:14:09.104 fused_ordering(163) 00:14:09.104 fused_ordering(164) 00:14:09.104 fused_ordering(165) 00:14:09.104 fused_ordering(166) 00:14:09.104 fused_ordering(167) 00:14:09.104 fused_ordering(168) 00:14:09.104 fused_ordering(169) 00:14:09.104 fused_ordering(170) 00:14:09.104 fused_ordering(171) 00:14:09.104 fused_ordering(172) 00:14:09.104 fused_ordering(173) 00:14:09.104 fused_ordering(174) 00:14:09.104 fused_ordering(175) 00:14:09.104 fused_ordering(176) 00:14:09.104 fused_ordering(177) 00:14:09.104 fused_ordering(178) 00:14:09.104 fused_ordering(179) 00:14:09.104 fused_ordering(180) 00:14:09.104 fused_ordering(181) 00:14:09.104 fused_ordering(182) 00:14:09.104 fused_ordering(183) 00:14:09.104 fused_ordering(184) 00:14:09.104 fused_ordering(185) 00:14:09.104 fused_ordering(186) 00:14:09.104 fused_ordering(187) 00:14:09.104 fused_ordering(188) 00:14:09.104 fused_ordering(189) 00:14:09.104 fused_ordering(190) 00:14:09.104 fused_ordering(191) 00:14:09.104 fused_ordering(192) 00:14:09.104 fused_ordering(193) 00:14:09.104 fused_ordering(194) 00:14:09.104 fused_ordering(195) 00:14:09.104 fused_ordering(196) 00:14:09.104 fused_ordering(197) 00:14:09.104 fused_ordering(198) 00:14:09.104 fused_ordering(199) 00:14:09.104 fused_ordering(200) 00:14:09.104 fused_ordering(201) 00:14:09.104 fused_ordering(202) 00:14:09.104 fused_ordering(203) 00:14:09.104 fused_ordering(204) 00:14:09.104 fused_ordering(205) 00:14:09.104 
fused_ordering(206) 00:14:09.104 fused_ordering(207) 00:14:09.104 fused_ordering(208) 00:14:09.104 fused_ordering(209) 00:14:09.104 fused_ordering(210) 00:14:09.104 fused_ordering(211) 00:14:09.104 fused_ordering(212) 00:14:09.104 fused_ordering(213) 00:14:09.104 fused_ordering(214) 00:14:09.104 fused_ordering(215) 00:14:09.104 fused_ordering(216) 00:14:09.104 fused_ordering(217) 00:14:09.104 fused_ordering(218) 00:14:09.104 fused_ordering(219) 00:14:09.104 fused_ordering(220) 00:14:09.104 fused_ordering(221) 00:14:09.104 fused_ordering(222) 00:14:09.104 fused_ordering(223) 00:14:09.104 fused_ordering(224) 00:14:09.104 fused_ordering(225) 00:14:09.104 fused_ordering(226) 00:14:09.104 fused_ordering(227) 00:14:09.104 fused_ordering(228) 00:14:09.104 fused_ordering(229) 00:14:09.104 fused_ordering(230) 00:14:09.104 fused_ordering(231) 00:14:09.104 fused_ordering(232) 00:14:09.104 fused_ordering(233) 00:14:09.104 fused_ordering(234) 00:14:09.104 fused_ordering(235) 00:14:09.104 fused_ordering(236) 00:14:09.104 fused_ordering(237) 00:14:09.104 fused_ordering(238) 00:14:09.104 fused_ordering(239) 00:14:09.104 fused_ordering(240) 00:14:09.104 fused_ordering(241) 00:14:09.104 fused_ordering(242) 00:14:09.104 fused_ordering(243) 00:14:09.104 fused_ordering(244) 00:14:09.104 fused_ordering(245) 00:14:09.105 fused_ordering(246) 00:14:09.105 fused_ordering(247) 00:14:09.105 fused_ordering(248) 00:14:09.105 fused_ordering(249) 00:14:09.105 fused_ordering(250) 00:14:09.105 fused_ordering(251) 00:14:09.105 fused_ordering(252) 00:14:09.105 fused_ordering(253) 00:14:09.105 fused_ordering(254) 00:14:09.105 fused_ordering(255) 00:14:09.105 fused_ordering(256) 00:14:09.105 fused_ordering(257) 00:14:09.105 fused_ordering(258) 00:14:09.105 fused_ordering(259) 00:14:09.105 fused_ordering(260) 00:14:09.105 fused_ordering(261) 00:14:09.105 fused_ordering(262) 00:14:09.105 fused_ordering(263) 00:14:09.105 fused_ordering(264) 00:14:09.105 fused_ordering(265) 00:14:09.105 fused_ordering(266) 00:14:09.105 fused_ordering(267) 00:14:09.105 fused_ordering(268) 00:14:09.105 fused_ordering(269) 00:14:09.105 fused_ordering(270) 00:14:09.105 fused_ordering(271) 00:14:09.105 fused_ordering(272) 00:14:09.105 fused_ordering(273) 00:14:09.105 fused_ordering(274) 00:14:09.105 fused_ordering(275) 00:14:09.105 fused_ordering(276) 00:14:09.105 fused_ordering(277) 00:14:09.105 fused_ordering(278) 00:14:09.105 fused_ordering(279) 00:14:09.105 fused_ordering(280) 00:14:09.105 fused_ordering(281) 00:14:09.105 fused_ordering(282) 00:14:09.105 fused_ordering(283) 00:14:09.105 fused_ordering(284) 00:14:09.105 fused_ordering(285) 00:14:09.105 fused_ordering(286) 00:14:09.105 fused_ordering(287) 00:14:09.105 fused_ordering(288) 00:14:09.105 fused_ordering(289) 00:14:09.105 fused_ordering(290) 00:14:09.105 fused_ordering(291) 00:14:09.105 fused_ordering(292) 00:14:09.105 fused_ordering(293) 00:14:09.105 fused_ordering(294) 00:14:09.105 fused_ordering(295) 00:14:09.105 fused_ordering(296) 00:14:09.105 fused_ordering(297) 00:14:09.105 fused_ordering(298) 00:14:09.105 fused_ordering(299) 00:14:09.105 fused_ordering(300) 00:14:09.105 fused_ordering(301) 00:14:09.105 fused_ordering(302) 00:14:09.105 fused_ordering(303) 00:14:09.105 fused_ordering(304) 00:14:09.105 fused_ordering(305) 00:14:09.105 fused_ordering(306) 00:14:09.105 fused_ordering(307) 00:14:09.105 fused_ordering(308) 00:14:09.105 fused_ordering(309) 00:14:09.105 fused_ordering(310) 00:14:09.105 fused_ordering(311) 00:14:09.105 fused_ordering(312) 00:14:09.105 fused_ordering(313) 
00:14:09.105 fused_ordering(314) 00:14:09.105 fused_ordering(315) 00:14:09.105 fused_ordering(316) 00:14:09.105 fused_ordering(317) 00:14:09.105 fused_ordering(318) 00:14:09.105 fused_ordering(319) 00:14:09.105 fused_ordering(320) 00:14:09.105 fused_ordering(321) 00:14:09.105 fused_ordering(322) 00:14:09.105 fused_ordering(323) 00:14:09.105 fused_ordering(324) 00:14:09.105 fused_ordering(325) 00:14:09.105 fused_ordering(326) 00:14:09.105 fused_ordering(327) 00:14:09.105 fused_ordering(328) 00:14:09.105 fused_ordering(329) 00:14:09.105 fused_ordering(330) 00:14:09.105 fused_ordering(331) 00:14:09.105 fused_ordering(332) 00:14:09.105 fused_ordering(333) 00:14:09.105 fused_ordering(334) 00:14:09.105 fused_ordering(335) 00:14:09.105 fused_ordering(336) 00:14:09.105 fused_ordering(337) 00:14:09.105 fused_ordering(338) 00:14:09.105 fused_ordering(339) 00:14:09.105 fused_ordering(340) 00:14:09.105 fused_ordering(341) 00:14:09.105 fused_ordering(342) 00:14:09.105 fused_ordering(343) 00:14:09.105 fused_ordering(344) 00:14:09.105 fused_ordering(345) 00:14:09.105 fused_ordering(346) 00:14:09.105 fused_ordering(347) 00:14:09.105 fused_ordering(348) 00:14:09.105 fused_ordering(349) 00:14:09.105 fused_ordering(350) 00:14:09.105 fused_ordering(351) 00:14:09.105 fused_ordering(352) 00:14:09.105 fused_ordering(353) 00:14:09.105 fused_ordering(354) 00:14:09.105 fused_ordering(355) 00:14:09.105 fused_ordering(356) 00:14:09.105 fused_ordering(357) 00:14:09.105 fused_ordering(358) 00:14:09.105 fused_ordering(359) 00:14:09.105 fused_ordering(360) 00:14:09.105 fused_ordering(361) 00:14:09.105 fused_ordering(362) 00:14:09.105 fused_ordering(363) 00:14:09.105 fused_ordering(364) 00:14:09.105 fused_ordering(365) 00:14:09.105 fused_ordering(366) 00:14:09.105 fused_ordering(367) 00:14:09.105 fused_ordering(368) 00:14:09.105 fused_ordering(369) 00:14:09.105 fused_ordering(370) 00:14:09.105 fused_ordering(371) 00:14:09.105 fused_ordering(372) 00:14:09.105 fused_ordering(373) 00:14:09.105 fused_ordering(374) 00:14:09.105 fused_ordering(375) 00:14:09.105 fused_ordering(376) 00:14:09.105 fused_ordering(377) 00:14:09.105 fused_ordering(378) 00:14:09.105 fused_ordering(379) 00:14:09.105 fused_ordering(380) 00:14:09.105 fused_ordering(381) 00:14:09.105 fused_ordering(382) 00:14:09.105 fused_ordering(383) 00:14:09.105 fused_ordering(384) 00:14:09.105 fused_ordering(385) 00:14:09.105 fused_ordering(386) 00:14:09.105 fused_ordering(387) 00:14:09.105 fused_ordering(388) 00:14:09.105 fused_ordering(389) 00:14:09.105 fused_ordering(390) 00:14:09.105 fused_ordering(391) 00:14:09.105 fused_ordering(392) 00:14:09.105 fused_ordering(393) 00:14:09.105 fused_ordering(394) 00:14:09.105 fused_ordering(395) 00:14:09.105 fused_ordering(396) 00:14:09.105 fused_ordering(397) 00:14:09.105 fused_ordering(398) 00:14:09.105 fused_ordering(399) 00:14:09.105 fused_ordering(400) 00:14:09.105 fused_ordering(401) 00:14:09.105 fused_ordering(402) 00:14:09.105 fused_ordering(403) 00:14:09.105 fused_ordering(404) 00:14:09.105 fused_ordering(405) 00:14:09.105 fused_ordering(406) 00:14:09.105 fused_ordering(407) 00:14:09.105 fused_ordering(408) 00:14:09.105 fused_ordering(409) 00:14:09.105 fused_ordering(410) 00:14:09.676 fused_ordering(411) 00:14:09.676 fused_ordering(412) 00:14:09.676 fused_ordering(413) 00:14:09.677 fused_ordering(414) 00:14:09.677 fused_ordering(415) 00:14:09.677 fused_ordering(416) 00:14:09.677 fused_ordering(417) 00:14:09.677 fused_ordering(418) 00:14:09.677 fused_ordering(419) 00:14:09.677 fused_ordering(420) 00:14:09.677 
fused_ordering(421) 00:14:09.677 fused_ordering(422) 00:14:09.677 fused_ordering(423) 00:14:09.677 fused_ordering(424) 00:14:09.677 fused_ordering(425) 00:14:09.677 fused_ordering(426) 00:14:09.677 fused_ordering(427) 00:14:09.677 fused_ordering(428) 00:14:09.677 fused_ordering(429) 00:14:09.677 fused_ordering(430) 00:14:09.677 fused_ordering(431) 00:14:09.677 fused_ordering(432) 00:14:09.677 fused_ordering(433) 00:14:09.677 fused_ordering(434) 00:14:09.677 fused_ordering(435) 00:14:09.677 fused_ordering(436) 00:14:09.677 fused_ordering(437) 00:14:09.677 fused_ordering(438) 00:14:09.677 fused_ordering(439) 00:14:09.677 fused_ordering(440) 00:14:09.677 fused_ordering(441) 00:14:09.677 fused_ordering(442) 00:14:09.677 fused_ordering(443) 00:14:09.677 fused_ordering(444) 00:14:09.677 fused_ordering(445) 00:14:09.677 fused_ordering(446) 00:14:09.677 fused_ordering(447) 00:14:09.677 fused_ordering(448) 00:14:09.677 fused_ordering(449) 00:14:09.677 fused_ordering(450) 00:14:09.677 fused_ordering(451) 00:14:09.677 fused_ordering(452) 00:14:09.677 fused_ordering(453) 00:14:09.677 fused_ordering(454) 00:14:09.677 fused_ordering(455) 00:14:09.677 fused_ordering(456) 00:14:09.677 fused_ordering(457) 00:14:09.677 fused_ordering(458) 00:14:09.677 fused_ordering(459) 00:14:09.677 fused_ordering(460) 00:14:09.677 fused_ordering(461) 00:14:09.677 fused_ordering(462) 00:14:09.677 fused_ordering(463) 00:14:09.677 fused_ordering(464) 00:14:09.677 fused_ordering(465) 00:14:09.677 fused_ordering(466) 00:14:09.677 fused_ordering(467) 00:14:09.677 fused_ordering(468) 00:14:09.677 fused_ordering(469) 00:14:09.677 fused_ordering(470) 00:14:09.677 fused_ordering(471) 00:14:09.677 fused_ordering(472) 00:14:09.677 fused_ordering(473) 00:14:09.677 fused_ordering(474) 00:14:09.677 fused_ordering(475) 00:14:09.677 fused_ordering(476) 00:14:09.677 fused_ordering(477) 00:14:09.677 fused_ordering(478) 00:14:09.677 fused_ordering(479) 00:14:09.677 fused_ordering(480) 00:14:09.677 fused_ordering(481) 00:14:09.677 fused_ordering(482) 00:14:09.677 fused_ordering(483) 00:14:09.677 fused_ordering(484) 00:14:09.677 fused_ordering(485) 00:14:09.677 fused_ordering(486) 00:14:09.677 fused_ordering(487) 00:14:09.677 fused_ordering(488) 00:14:09.677 fused_ordering(489) 00:14:09.677 fused_ordering(490) 00:14:09.677 fused_ordering(491) 00:14:09.677 fused_ordering(492) 00:14:09.677 fused_ordering(493) 00:14:09.677 fused_ordering(494) 00:14:09.677 fused_ordering(495) 00:14:09.677 fused_ordering(496) 00:14:09.677 fused_ordering(497) 00:14:09.677 fused_ordering(498) 00:14:09.677 fused_ordering(499) 00:14:09.677 fused_ordering(500) 00:14:09.677 fused_ordering(501) 00:14:09.677 fused_ordering(502) 00:14:09.677 fused_ordering(503) 00:14:09.677 fused_ordering(504) 00:14:09.677 fused_ordering(505) 00:14:09.677 fused_ordering(506) 00:14:09.677 fused_ordering(507) 00:14:09.677 fused_ordering(508) 00:14:09.677 fused_ordering(509) 00:14:09.677 fused_ordering(510) 00:14:09.677 fused_ordering(511) 00:14:09.677 fused_ordering(512) 00:14:09.677 fused_ordering(513) 00:14:09.677 fused_ordering(514) 00:14:09.677 fused_ordering(515) 00:14:09.677 fused_ordering(516) 00:14:09.677 fused_ordering(517) 00:14:09.677 fused_ordering(518) 00:14:09.677 fused_ordering(519) 00:14:09.677 fused_ordering(520) 00:14:09.677 fused_ordering(521) 00:14:09.677 fused_ordering(522) 00:14:09.677 fused_ordering(523) 00:14:09.677 fused_ordering(524) 00:14:09.677 fused_ordering(525) 00:14:09.677 fused_ordering(526) 00:14:09.677 fused_ordering(527) 00:14:09.677 fused_ordering(528) 
00:14:09.677 fused_ordering(529) 00:14:09.677 fused_ordering(530) 00:14:09.677 fused_ordering(531) 00:14:09.677 fused_ordering(532) 00:14:09.677 fused_ordering(533) 00:14:09.677 fused_ordering(534) 00:14:09.677 fused_ordering(535) 00:14:09.677 fused_ordering(536) 00:14:09.677 fused_ordering(537) 00:14:09.677 fused_ordering(538) 00:14:09.677 fused_ordering(539) 00:14:09.677 fused_ordering(540) 00:14:09.677 fused_ordering(541) 00:14:09.677 fused_ordering(542) 00:14:09.677 fused_ordering(543) 00:14:09.677 fused_ordering(544) 00:14:09.677 fused_ordering(545) 00:14:09.677 fused_ordering(546) 00:14:09.677 fused_ordering(547) 00:14:09.677 fused_ordering(548) 00:14:09.677 fused_ordering(549) 00:14:09.677 fused_ordering(550) 00:14:09.677 fused_ordering(551) 00:14:09.677 fused_ordering(552) 00:14:09.677 fused_ordering(553) 00:14:09.677 fused_ordering(554) 00:14:09.677 fused_ordering(555) 00:14:09.677 fused_ordering(556) 00:14:09.677 fused_ordering(557) 00:14:09.677 fused_ordering(558) 00:14:09.677 fused_ordering(559) 00:14:09.677 fused_ordering(560) 00:14:09.677 fused_ordering(561) 00:14:09.677 fused_ordering(562) 00:14:09.677 fused_ordering(563) 00:14:09.677 fused_ordering(564) 00:14:09.677 fused_ordering(565) 00:14:09.677 fused_ordering(566) 00:14:09.677 fused_ordering(567) 00:14:09.677 fused_ordering(568) 00:14:09.677 fused_ordering(569) 00:14:09.677 fused_ordering(570) 00:14:09.677 fused_ordering(571) 00:14:09.677 fused_ordering(572) 00:14:09.677 fused_ordering(573) 00:14:09.677 fused_ordering(574) 00:14:09.677 fused_ordering(575) 00:14:09.677 fused_ordering(576) 00:14:09.677 fused_ordering(577) 00:14:09.677 fused_ordering(578) 00:14:09.677 fused_ordering(579) 00:14:09.677 fused_ordering(580) 00:14:09.677 fused_ordering(581) 00:14:09.677 fused_ordering(582) 00:14:09.677 fused_ordering(583) 00:14:09.677 fused_ordering(584) 00:14:09.677 fused_ordering(585) 00:14:09.677 fused_ordering(586) 00:14:09.677 fused_ordering(587) 00:14:09.677 fused_ordering(588) 00:14:09.677 fused_ordering(589) 00:14:09.677 fused_ordering(590) 00:14:09.677 fused_ordering(591) 00:14:09.677 fused_ordering(592) 00:14:09.677 fused_ordering(593) 00:14:09.677 fused_ordering(594) 00:14:09.677 fused_ordering(595) 00:14:09.677 fused_ordering(596) 00:14:09.677 fused_ordering(597) 00:14:09.677 fused_ordering(598) 00:14:09.677 fused_ordering(599) 00:14:09.677 fused_ordering(600) 00:14:09.677 fused_ordering(601) 00:14:09.677 fused_ordering(602) 00:14:09.677 fused_ordering(603) 00:14:09.677 fused_ordering(604) 00:14:09.677 fused_ordering(605) 00:14:09.677 fused_ordering(606) 00:14:09.677 fused_ordering(607) 00:14:09.677 fused_ordering(608) 00:14:09.677 fused_ordering(609) 00:14:09.677 fused_ordering(610) 00:14:09.677 fused_ordering(611) 00:14:09.677 fused_ordering(612) 00:14:09.677 fused_ordering(613) 00:14:09.677 fused_ordering(614) 00:14:09.677 fused_ordering(615) 00:14:10.248 fused_ordering(616) 00:14:10.248 fused_ordering(617) 00:14:10.248 fused_ordering(618) 00:14:10.248 fused_ordering(619) 00:14:10.248 fused_ordering(620) 00:14:10.248 fused_ordering(621) 00:14:10.248 fused_ordering(622) 00:14:10.248 fused_ordering(623) 00:14:10.248 fused_ordering(624) 00:14:10.248 fused_ordering(625) 00:14:10.248 fused_ordering(626) 00:14:10.248 fused_ordering(627) 00:14:10.248 fused_ordering(628) 00:14:10.248 fused_ordering(629) 00:14:10.248 fused_ordering(630) 00:14:10.248 fused_ordering(631) 00:14:10.248 fused_ordering(632) 00:14:10.248 fused_ordering(633) 00:14:10.248 fused_ordering(634) 00:14:10.248 fused_ordering(635) 00:14:10.248 
fused_ordering(636) 00:14:10.248 fused_ordering(637) 00:14:10.248 fused_ordering(638) 00:14:10.248 fused_ordering(639) 00:14:10.248 fused_ordering(640) 00:14:10.248 fused_ordering(641) 00:14:10.248 fused_ordering(642) 00:14:10.248 fused_ordering(643) 00:14:10.248 fused_ordering(644) 00:14:10.248 fused_ordering(645) 00:14:10.248 fused_ordering(646) 00:14:10.248 fused_ordering(647) 00:14:10.248 fused_ordering(648) 00:14:10.248 fused_ordering(649) 00:14:10.248 fused_ordering(650) 00:14:10.248 fused_ordering(651) 00:14:10.248 fused_ordering(652) 00:14:10.248 fused_ordering(653) 00:14:10.248 fused_ordering(654) 00:14:10.248 fused_ordering(655) 00:14:10.248 fused_ordering(656) 00:14:10.248 fused_ordering(657) 00:14:10.248 fused_ordering(658) 00:14:10.248 fused_ordering(659) 00:14:10.248 fused_ordering(660) 00:14:10.248 fused_ordering(661) 00:14:10.248 fused_ordering(662) 00:14:10.248 fused_ordering(663) 00:14:10.248 fused_ordering(664) 00:14:10.248 fused_ordering(665) 00:14:10.248 fused_ordering(666) 00:14:10.248 fused_ordering(667) 00:14:10.248 fused_ordering(668) 00:14:10.248 fused_ordering(669) 00:14:10.248 fused_ordering(670) 00:14:10.248 fused_ordering(671) 00:14:10.248 fused_ordering(672) 00:14:10.248 fused_ordering(673) 00:14:10.248 fused_ordering(674) 00:14:10.248 fused_ordering(675) 00:14:10.248 fused_ordering(676) 00:14:10.248 fused_ordering(677) 00:14:10.248 fused_ordering(678) 00:14:10.248 fused_ordering(679) 00:14:10.248 fused_ordering(680) 00:14:10.248 fused_ordering(681) 00:14:10.248 fused_ordering(682) 00:14:10.248 fused_ordering(683) 00:14:10.248 fused_ordering(684) 00:14:10.248 fused_ordering(685) 00:14:10.248 fused_ordering(686) 00:14:10.248 fused_ordering(687) 00:14:10.248 fused_ordering(688) 00:14:10.248 fused_ordering(689) 00:14:10.248 fused_ordering(690) 00:14:10.248 fused_ordering(691) 00:14:10.248 fused_ordering(692) 00:14:10.248 fused_ordering(693) 00:14:10.248 fused_ordering(694) 00:14:10.248 fused_ordering(695) 00:14:10.248 fused_ordering(696) 00:14:10.248 fused_ordering(697) 00:14:10.248 fused_ordering(698) 00:14:10.248 fused_ordering(699) 00:14:10.248 fused_ordering(700) 00:14:10.248 fused_ordering(701) 00:14:10.248 fused_ordering(702) 00:14:10.248 fused_ordering(703) 00:14:10.248 fused_ordering(704) 00:14:10.248 fused_ordering(705) 00:14:10.248 fused_ordering(706) 00:14:10.248 fused_ordering(707) 00:14:10.248 fused_ordering(708) 00:14:10.248 fused_ordering(709) 00:14:10.248 fused_ordering(710) 00:14:10.248 fused_ordering(711) 00:14:10.248 fused_ordering(712) 00:14:10.248 fused_ordering(713) 00:14:10.248 fused_ordering(714) 00:14:10.248 fused_ordering(715) 00:14:10.248 fused_ordering(716) 00:14:10.248 fused_ordering(717) 00:14:10.248 fused_ordering(718) 00:14:10.248 fused_ordering(719) 00:14:10.248 fused_ordering(720) 00:14:10.248 fused_ordering(721) 00:14:10.248 fused_ordering(722) 00:14:10.248 fused_ordering(723) 00:14:10.248 fused_ordering(724) 00:14:10.248 fused_ordering(725) 00:14:10.249 fused_ordering(726) 00:14:10.249 fused_ordering(727) 00:14:10.249 fused_ordering(728) 00:14:10.249 fused_ordering(729) 00:14:10.249 fused_ordering(730) 00:14:10.249 fused_ordering(731) 00:14:10.249 fused_ordering(732) 00:14:10.249 fused_ordering(733) 00:14:10.249 fused_ordering(734) 00:14:10.249 fused_ordering(735) 00:14:10.249 fused_ordering(736) 00:14:10.249 fused_ordering(737) 00:14:10.249 fused_ordering(738) 00:14:10.249 fused_ordering(739) 00:14:10.249 fused_ordering(740) 00:14:10.249 fused_ordering(741) 00:14:10.249 fused_ordering(742) 00:14:10.249 fused_ordering(743) 
00:14:10.249 fused_ordering(744) 00:14:10.249 fused_ordering(745) 00:14:10.249 fused_ordering(746) 00:14:10.249 fused_ordering(747) 00:14:10.249 fused_ordering(748) 00:14:10.249 fused_ordering(749) 00:14:10.249 fused_ordering(750) 00:14:10.249 fused_ordering(751) 00:14:10.249 fused_ordering(752) 00:14:10.249 fused_ordering(753) 00:14:10.249 fused_ordering(754) 00:14:10.249 fused_ordering(755) 00:14:10.249 fused_ordering(756) 00:14:10.249 fused_ordering(757) 00:14:10.249 fused_ordering(758) 00:14:10.249 fused_ordering(759) 00:14:10.249 fused_ordering(760) 00:14:10.249 fused_ordering(761) 00:14:10.249 fused_ordering(762) 00:14:10.249 fused_ordering(763) 00:14:10.249 fused_ordering(764) 00:14:10.249 fused_ordering(765) 00:14:10.249 fused_ordering(766) 00:14:10.249 fused_ordering(767) 00:14:10.249 fused_ordering(768) 00:14:10.249 fused_ordering(769) 00:14:10.249 fused_ordering(770) 00:14:10.249 fused_ordering(771) 00:14:10.249 fused_ordering(772) 00:14:10.249 fused_ordering(773) 00:14:10.249 fused_ordering(774) 00:14:10.249 fused_ordering(775) 00:14:10.249 fused_ordering(776) 00:14:10.249 fused_ordering(777) 00:14:10.249 fused_ordering(778) 00:14:10.249 fused_ordering(779) 00:14:10.249 fused_ordering(780) 00:14:10.249 fused_ordering(781) 00:14:10.249 fused_ordering(782) 00:14:10.249 fused_ordering(783) 00:14:10.249 fused_ordering(784) 00:14:10.249 fused_ordering(785) 00:14:10.249 fused_ordering(786) 00:14:10.249 fused_ordering(787) 00:14:10.249 fused_ordering(788) 00:14:10.249 fused_ordering(789) 00:14:10.249 fused_ordering(790) 00:14:10.249 fused_ordering(791) 00:14:10.249 fused_ordering(792) 00:14:10.249 fused_ordering(793) 00:14:10.249 fused_ordering(794) 00:14:10.249 fused_ordering(795) 00:14:10.249 fused_ordering(796) 00:14:10.249 fused_ordering(797) 00:14:10.249 fused_ordering(798) 00:14:10.249 fused_ordering(799) 00:14:10.249 fused_ordering(800) 00:14:10.249 fused_ordering(801) 00:14:10.249 fused_ordering(802) 00:14:10.249 fused_ordering(803) 00:14:10.249 fused_ordering(804) 00:14:10.249 fused_ordering(805) 00:14:10.249 fused_ordering(806) 00:14:10.249 fused_ordering(807) 00:14:10.249 fused_ordering(808) 00:14:10.249 fused_ordering(809) 00:14:10.249 fused_ordering(810) 00:14:10.249 fused_ordering(811) 00:14:10.249 fused_ordering(812) 00:14:10.249 fused_ordering(813) 00:14:10.249 fused_ordering(814) 00:14:10.249 fused_ordering(815) 00:14:10.249 fused_ordering(816) 00:14:10.249 fused_ordering(817) 00:14:10.249 fused_ordering(818) 00:14:10.249 fused_ordering(819) 00:14:10.249 fused_ordering(820) 00:14:10.820 fused_ordering(821) 00:14:10.820 fused_ordering(822) 00:14:10.820 fused_ordering(823) 00:14:10.820 fused_ordering(824) 00:14:10.820 fused_ordering(825) 00:14:10.820 fused_ordering(826) 00:14:10.820 fused_ordering(827) 00:14:10.820 fused_ordering(828) 00:14:10.820 fused_ordering(829) 00:14:10.820 fused_ordering(830) 00:14:10.820 fused_ordering(831) 00:14:10.820 fused_ordering(832) 00:14:10.820 fused_ordering(833) 00:14:10.820 fused_ordering(834) 00:14:10.820 fused_ordering(835) 00:14:10.820 fused_ordering(836) 00:14:10.820 fused_ordering(837) 00:14:10.820 fused_ordering(838) 00:14:10.820 fused_ordering(839) 00:14:10.820 fused_ordering(840) 00:14:10.820 fused_ordering(841) 00:14:10.820 fused_ordering(842) 00:14:10.820 fused_ordering(843) 00:14:10.820 fused_ordering(844) 00:14:10.820 fused_ordering(845) 00:14:10.820 fused_ordering(846) 00:14:10.820 fused_ordering(847) 00:14:10.820 fused_ordering(848) 00:14:10.820 fused_ordering(849) 00:14:10.820 fused_ordering(850) 00:14:10.820 
fused_ordering(851) 00:14:10.820 fused_ordering(852) 00:14:10.820 fused_ordering(853) 00:14:10.820 fused_ordering(854) 00:14:10.820 fused_ordering(855) 00:14:10.820 fused_ordering(856) 00:14:10.820 fused_ordering(857) 00:14:10.820 fused_ordering(858) 00:14:10.820 fused_ordering(859) 00:14:10.820 fused_ordering(860) 00:14:10.820 fused_ordering(861) 00:14:10.820 fused_ordering(862) 00:14:10.820 fused_ordering(863) 00:14:10.820 fused_ordering(864) 00:14:10.820 fused_ordering(865) 00:14:10.820 fused_ordering(866) 00:14:10.820 fused_ordering(867) 00:14:10.820 fused_ordering(868) 00:14:10.820 fused_ordering(869) 00:14:10.820 fused_ordering(870) 00:14:10.820 fused_ordering(871) 00:14:10.820 fused_ordering(872) 00:14:10.820 fused_ordering(873) 00:14:10.820 fused_ordering(874) 00:14:10.820 fused_ordering(875) 00:14:10.820 fused_ordering(876) 00:14:10.820 fused_ordering(877) 00:14:10.820 fused_ordering(878) 00:14:10.820 fused_ordering(879) 00:14:10.820 fused_ordering(880) 00:14:10.820 fused_ordering(881) 00:14:10.820 fused_ordering(882) 00:14:10.820 fused_ordering(883) 00:14:10.820 fused_ordering(884) 00:14:10.820 fused_ordering(885) 00:14:10.820 fused_ordering(886) 00:14:10.820 fused_ordering(887) 00:14:10.820 fused_ordering(888) 00:14:10.820 fused_ordering(889) 00:14:10.820 fused_ordering(890) 00:14:10.820 fused_ordering(891) 00:14:10.820 fused_ordering(892) 00:14:10.820 fused_ordering(893) 00:14:10.820 fused_ordering(894) 00:14:10.820 fused_ordering(895) 00:14:10.820 fused_ordering(896) 00:14:10.820 fused_ordering(897) 00:14:10.820 fused_ordering(898) 00:14:10.820 fused_ordering(899) 00:14:10.820 fused_ordering(900) 00:14:10.820 fused_ordering(901) 00:14:10.820 fused_ordering(902) 00:14:10.820 fused_ordering(903) 00:14:10.820 fused_ordering(904) 00:14:10.820 fused_ordering(905) 00:14:10.820 fused_ordering(906) 00:14:10.820 fused_ordering(907) 00:14:10.821 fused_ordering(908) 00:14:10.821 fused_ordering(909) 00:14:10.821 fused_ordering(910) 00:14:10.821 fused_ordering(911) 00:14:10.821 fused_ordering(912) 00:14:10.821 fused_ordering(913) 00:14:10.821 fused_ordering(914) 00:14:10.821 fused_ordering(915) 00:14:10.821 fused_ordering(916) 00:14:10.821 fused_ordering(917) 00:14:10.821 fused_ordering(918) 00:14:10.821 fused_ordering(919) 00:14:10.821 fused_ordering(920) 00:14:10.821 fused_ordering(921) 00:14:10.821 fused_ordering(922) 00:14:10.821 fused_ordering(923) 00:14:10.821 fused_ordering(924) 00:14:10.821 fused_ordering(925) 00:14:10.821 fused_ordering(926) 00:14:10.821 fused_ordering(927) 00:14:10.821 fused_ordering(928) 00:14:10.821 fused_ordering(929) 00:14:10.821 fused_ordering(930) 00:14:10.821 fused_ordering(931) 00:14:10.821 fused_ordering(932) 00:14:10.821 fused_ordering(933) 00:14:10.821 fused_ordering(934) 00:14:10.821 fused_ordering(935) 00:14:10.821 fused_ordering(936) 00:14:10.821 fused_ordering(937) 00:14:10.821 fused_ordering(938) 00:14:10.821 fused_ordering(939) 00:14:10.821 fused_ordering(940) 00:14:10.821 fused_ordering(941) 00:14:10.821 fused_ordering(942) 00:14:10.821 fused_ordering(943) 00:14:10.821 fused_ordering(944) 00:14:10.821 fused_ordering(945) 00:14:10.821 fused_ordering(946) 00:14:10.821 fused_ordering(947) 00:14:10.821 fused_ordering(948) 00:14:10.821 fused_ordering(949) 00:14:10.821 fused_ordering(950) 00:14:10.821 fused_ordering(951) 00:14:10.821 fused_ordering(952) 00:14:10.821 fused_ordering(953) 00:14:10.821 fused_ordering(954) 00:14:10.821 fused_ordering(955) 00:14:10.821 fused_ordering(956) 00:14:10.821 fused_ordering(957) 00:14:10.821 fused_ordering(958) 
00:14:10.821 fused_ordering(959) 00:14:10.821 fused_ordering(960) 00:14:10.821 fused_ordering(961) 00:14:10.821 fused_ordering(962) 00:14:10.821 fused_ordering(963) 00:14:10.821 fused_ordering(964) 00:14:10.821 fused_ordering(965) 00:14:10.821 fused_ordering(966) 00:14:10.821 fused_ordering(967) 00:14:10.821 fused_ordering(968) 00:14:10.821 fused_ordering(969) 00:14:10.821 fused_ordering(970) 00:14:10.821 fused_ordering(971) 00:14:10.821 fused_ordering(972) 00:14:10.821 fused_ordering(973) 00:14:10.821 fused_ordering(974) 00:14:10.821 fused_ordering(975) 00:14:10.821 fused_ordering(976) 00:14:10.821 fused_ordering(977) 00:14:10.821 fused_ordering(978) 00:14:10.821 fused_ordering(979) 00:14:10.821 fused_ordering(980) 00:14:10.821 fused_ordering(981) 00:14:10.821 fused_ordering(982) 00:14:10.821 fused_ordering(983) 00:14:10.821 fused_ordering(984) 00:14:10.821 fused_ordering(985) 00:14:10.821 fused_ordering(986) 00:14:10.821 fused_ordering(987) 00:14:10.821 fused_ordering(988) 00:14:10.821 fused_ordering(989) 00:14:10.821 fused_ordering(990) 00:14:10.821 fused_ordering(991) 00:14:10.821 fused_ordering(992) 00:14:10.821 fused_ordering(993) 00:14:10.821 fused_ordering(994) 00:14:10.821 fused_ordering(995) 00:14:10.821 fused_ordering(996) 00:14:10.821 fused_ordering(997) 00:14:10.821 fused_ordering(998) 00:14:10.821 fused_ordering(999) 00:14:10.821 fused_ordering(1000) 00:14:10.821 fused_ordering(1001) 00:14:10.821 fused_ordering(1002) 00:14:10.821 fused_ordering(1003) 00:14:10.821 fused_ordering(1004) 00:14:10.821 fused_ordering(1005) 00:14:10.821 fused_ordering(1006) 00:14:10.821 fused_ordering(1007) 00:14:10.821 fused_ordering(1008) 00:14:10.821 fused_ordering(1009) 00:14:10.821 fused_ordering(1010) 00:14:10.821 fused_ordering(1011) 00:14:10.821 fused_ordering(1012) 00:14:10.821 fused_ordering(1013) 00:14:10.821 fused_ordering(1014) 00:14:10.821 fused_ordering(1015) 00:14:10.821 fused_ordering(1016) 00:14:10.821 fused_ordering(1017) 00:14:10.821 fused_ordering(1018) 00:14:10.821 fused_ordering(1019) 00:14:10.821 fused_ordering(1020) 00:14:10.821 fused_ordering(1021) 00:14:10.821 fused_ordering(1022) 00:14:10.821 fused_ordering(1023) 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.821 rmmod nvme_tcp 00:14:10.821 rmmod nvme_fabrics 00:14:10.821 rmmod nvme_keyring 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:10.821 11:29:02 
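The enumeration ends at fused_ordering(1023) and nvmftestfini begins tearing down: sync, then up to 20 attempts to unload nvme-tcp and nvme-fabrics (the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away) with errexit suspended so a transiently busy module cannot abort the run. A condensed sketch of the traced loop; the retry delay and break condition are assumptions, since only one successful iteration appears in the log:

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1    # assumed back-off between attempts
  done
  set -e
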
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3459622 ']' 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3459622 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3459622 ']' 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3459622 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459622 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459622' 00:14:10.821 killing process with pid 3459622 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3459622 00:14:10.821 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3459622 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.083 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.996 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.996 00:14:12.996 real 0m12.771s 00:14:12.996 user 0m6.622s 00:14:12.996 sys 0m6.727s 00:14:12.996 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.996 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 ************************************ 00:14:12.996 END TEST nvmf_fused_ordering 00:14:12.996 
************************************ 00:14:12.996 11:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:12.996 11:29:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.996 11:29:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.996 11:29:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.259 ************************************ 00:14:13.259 START TEST nvmf_ns_masking 00:14:13.259 ************************************ 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.259 * Looking for test storage... 00:14:13.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.259 --rc genhtml_branch_coverage=1 00:14:13.259 --rc genhtml_function_coverage=1 00:14:13.259 --rc genhtml_legend=1 00:14:13.259 --rc geninfo_all_blocks=1 00:14:13.259 --rc geninfo_unexecuted_blocks=1 00:14:13.259 00:14:13.259 ' 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.259 --rc genhtml_branch_coverage=1 00:14:13.259 --rc genhtml_function_coverage=1 00:14:13.259 --rc genhtml_legend=1 00:14:13.259 --rc geninfo_all_blocks=1 00:14:13.259 --rc geninfo_unexecuted_blocks=1 00:14:13.259 00:14:13.259 ' 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.259 --rc genhtml_branch_coverage=1 00:14:13.259 --rc genhtml_function_coverage=1 00:14:13.259 --rc genhtml_legend=1 00:14:13.259 --rc geninfo_all_blocks=1 00:14:13.259 --rc geninfo_unexecuted_blocks=1 00:14:13.259 00:14:13.259 ' 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.259 --rc genhtml_branch_coverage=1 00:14:13.259 --rc genhtml_function_coverage=1 00:14:13.259 --rc genhtml_legend=1 00:14:13.259 --rc geninfo_all_blocks=1 00:14:13.259 --rc geninfo_unexecuted_blocks=1 00:14:13.259 00:14:13.259 ' 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
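The lt 1.15 2 check above is scripts/common.sh comparing the installed lcov version against 2: each version string is split on '.', '-' and ':' into an array and compared field by field, so the walk stops at the first field (1 < 2) and returns 0. A condensed re-derivation of the less-than path, under a hypothetical name and assuming purely numeric fields (the real cmp_versions also validates each field with decimal and dispatches on the operator):

  version_lt() {
      local IFS='.-:' v=0
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      while ((v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # e.g. 1 < 2 decides lt 1.15 2
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
          ((v++))
      done
      return 1    # versions are equal
  }
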
nvmf/common.sh@7 -- # uname -s 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.259 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:13.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
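[Annotation] Two things worth flagging in the block above: the PATH value keeps growing because paths/export.sh re-prepends the same Go/protoc/golangci directories every time it is sourced (noisy but benign), and the one real diagnostic is "line 33: [: : integer expression expected", which is a numeric test ('[' '' -eq 1 ']') evaluated against an empty variable. A common guard, with a hypothetical flag name standing in for the unset variable:

    # Default the value before the numeric test so '[' never sees an empty string.
    : "${SPDK_TEST_SOMETHING:=0}"          # hypothetical flag name
    if [ "$SPDK_TEST_SOMETHING" -eq 1 ]; then
        echo "feature enabled"
    fi
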
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bdadaff3-b0b9-464f-bc5d-1e8c7917a1be 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f2f83c09-cfe1-48d0-925f-86ca47d7f63a 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:13.260 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0e3a2b21-4013-40b9-8117-52a8ddf59e33 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.521 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.522 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.522 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:13.522 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:13.522 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:13.522 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:21.700 11:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.700 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:21.701 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:21.701 11:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:21.701 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:21.701 Found net devices under 0000:31:00.0: cvl_0_0 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
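[Annotation] The loop traced above maps each supported PCI function to its kernel net devices through sysfs; that is all the "Found net devices under ..." lines are. A condensed equivalent, using the first E810 address from this log:

    pci=0000:31:00.0                        # first E810 port found above
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue          # no net children -> not a usable NIC
        dev=${path##*/}
        state=$(cat /sys/class/net/$dev/operstate 2>/dev/null)
        echo "Found net devices under $pci: $dev ($state)"
    done
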
00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:21.701 Found net devices under 0000:31:00.1: cvl_0_1 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.701 11:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:21.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:14:21.701 00:14:21.701 --- 10.0.0.2 ping statistics --- 00:14:21.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.701 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:14:21.701 00:14:21.701 --- 10.0.0.1 ping statistics --- 00:14:21.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.701 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:21.701 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3464688 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3464688 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3464688 ']' 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
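[Annotation] Everything from "ip netns add" through the two pings is the standard two-port loopback topology these phy tests build: one E810 port becomes the target inside a network namespace, its sibling port stays in the root namespace as the initiator. Replayed as a plain script (run as root; interface names as in this log):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1                # target -> initiator
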
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.702 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 [2024-12-09 11:29:12.891365] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:14:21.702 [2024-12-09 11:29:12.891432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.702 [2024-12-09 11:29:12.975524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.702 [2024-12-09 11:29:13.017147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.702 [2024-12-09 11:29:13.017186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.702 [2024-12-09 11:29:13.017194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.702 [2024-12-09 11:29:13.017201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.702 [2024-12-09 11:29:13.017207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
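[Annotation] waitforlisten above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. The gist, as a standalone poll (the real helper does more bookkeeping; the retry bound here is illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    pid=$!
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        # rpc_get_methods is a cheap RPC that succeeds once the socket is live
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done
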
00:14:21.702 [2024-12-09 11:29:13.017809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.702 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.702 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:21.702 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.702 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:21.702 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.702 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.963 [2024-12-09 11:29:13.872859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.963 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:21.963 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:21.963 11:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:21.963 Malloc1 00:14:21.963 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:22.223 Malloc2 00:14:22.223 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:22.485 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:22.485 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.746 [2024-12-09 11:29:14.736629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.746 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:22.746 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e3a2b21-4013-40b9-8117-52a8ddf59e33 -a 10.0.0.2 -s 4420 -i 4 00:14:22.746 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.746 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:22.746 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.746 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:22.746 
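[Annotation] The target-side provisioning that just scrolled by, condensed to its RPC sequence plus the initiator-side connect; every value below is the test's own, taken from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side (kernel nvme-tcp driver), exactly as traced:
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 0e3a2b21-4013-40b9-8117-52a8ddf59e33 -a 10.0.0.2 -s 4420 -i 4
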
11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.293 [ 0]:0x1 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.293 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac0f90c7c88748969342975e056e5c65 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac0f90c7c88748969342975e056e5c65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.293 [ 0]:0x1 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac0f90c7c88748969342975e056e5c65 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac0f90c7c88748969342975e056e5c65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.293 11:29:17 
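[Annotation] ns_is_visible, exercised above, decides visibility from the host's point of view: the namespace ID must show up in "nvme list-ns" and the NGUID that "nvme id-ns" reports for it must be non-zero. A condensed sketch of the same check (the real helper lives in target/ns_masking.sh and also prints the matched line):

    ns_is_visible() {
        local nsid=$1                                     # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1 && echo "nsid 1 is visible to this host"
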
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.293 [ 1]:0x2 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c68818207d114c53a1be14658ceace63 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c68818207d114c53a1be14658ceace63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:25.293 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.554 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.814 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:25.814 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:25.814 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e3a2b21-4013-40b9-8117-52a8ddf59e33 -a 10.0.0.2 -s 4420 -i 4 00:14:26.074 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:26.074 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:26.074 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.074 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:26.074 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:26.074 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:27.990 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.250 [ 0]:0x2 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c68818207d114c53a1be14658ceace63 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c68818207d114c53a1be14658ceace63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.250 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.510 [ 0]:0x1 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac0f90c7c88748969342975e056e5c65 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac0f90c7c88748969342975e056e5c65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.510 [ 1]:0x2 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c68818207d114c53a1be14658ceace63 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c68818207d114c53a1be14658ceace63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.510 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.771 11:29:20 
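[Annotation] This is the crux of the test: a namespace re-added with --no-auto-visible stays hidden until a host NQN is explicitly allowed, and hiding it again is a single RPC. The NOT wrapper seen throughout the trace simply asserts that a probe fails (the real helper in autotest_common.sh also tracks exit statuses above 128 for signal deaths). Condensed:

    NOT() { ! "$@"; }                     # simplified status inversion
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 1 --no-auto-visible
    NOT ns_is_visible 0x1                 # masked: invisible by default
    $rpc nvmf_ns_add_host $nqn 1 nqn.2016-06.io.spdk:host1
    ns_is_visible 0x1                     # now visible to host1
    $rpc nvmf_ns_remove_host $nqn 1 nqn.2016-06.io.spdk:host1
    NOT ns_is_visible 0x1                 # hidden again
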
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.771 [ 0]:0x2 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.771 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.039 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c68818207d114c53a1be14658ceace63 00:14:29.039 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c68818207d114c53a1be14658ceace63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.039 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:29.039 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.039 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.039 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:29.039 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e3a2b21-4013-40b9-8117-52a8ddf59e33 -a 10.0.0.2 -s 4420 -i 4 00:14:29.302 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:29.302 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:29.302 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.302 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:29.302 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:29.302 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:31.217 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.479 [ 0]:0x1 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac0f90c7c88748969342975e056e5c65 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac0f90c7c88748969342975e056e5c65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.479 [ 1]:0x2 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c68818207d114c53a1be14658ceace63 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c68818207d114c53a1be14658ceace63 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.479 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.741 [ 0]:0x2 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c68818207d114c53a1be14658ceace63 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c68818207d114c53a1be14658ceace63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.741 11:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:31.741 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:31.742 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.004 [2024-12-09 11:29:23.967224] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:32.005 request: 00:14:32.005 { 00:14:32.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.005 "nsid": 2, 00:14:32.005 "host": "nqn.2016-06.io.spdk:host1", 00:14:32.005 "method": "nvmf_ns_remove_host", 00:14:32.005 "req_id": 1 00:14:32.005 } 00:14:32.005 Got JSON-RPC error response 00:14:32.005 response: 00:14:32.005 { 00:14:32.005 "code": -32602, 00:14:32.005 "message": "Invalid parameters" 00:14:32.005 } 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.005 11:29:23 
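[Annotation] The request/response pair above is the raw JSON-RPC exchange rpc.py prints on failure; -32602 is the standard "Invalid parameters" code. The NOT wrapper expects exactly this outcome: namespace 2 was added without --no-auto-visible, so, as the nvmf_rpc_ns_visible_paused error suggests, there is no per-host visibility list on it to edit. Replaying the failing call by hand:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if ! $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 \
            nqn.2016-06.io.spdk:host1; then
        echo "rejected as expected: nsid 2 is auto-visible, nothing to unmask"
    fi
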
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.005 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.005 [ 0]:0x2 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c68818207d114c53a1be14658ceace63 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c68818207d114c53a1be14658ceace63 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3466963 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3466963 /var/tmp/host.sock 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3466963 ']' 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:32.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.005 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.267 [2024-12-09 11:29:24.207155] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:14:32.267 [2024-12-09 11:29:24.207206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466963 ] 00:14:32.267 [2024-12-09 11:29:24.295193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.267 [2024-12-09 11:29:24.331509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.210 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.210 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:33.210 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.210 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.472 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bdadaff3-b0b9-464f-bc5d-1e8c7917a1be 00:14:33.472 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.472 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BDADAFF3B0B9464FBC5D1E8C7917A1BE -i 00:14:33.472 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f2f83c09-cfe1-48d0-925f-86ca47d7f63a 00:14:33.472 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.472 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F2F83C09CFE148D0925F86CA47D7F63A -i 00:14:33.734 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:33.735 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:33.995 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:33.995 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:34.567 nvme0n1 00:14:34.567 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:34.567 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:34.827 nvme1n2 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:34.827 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:35.089 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bdadaff3-b0b9-464f-bc5d-1e8c7917a1be == \b\d\a\d\a\f\f\3\-\b\0\b\9\-\4\6\4\f\-\b\c\5\d\-\1\e\8\c\7\9\1\7\a\1\b\e ]] 00:14:35.089 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:35.089 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:35.089 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:35.350 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
f2f83c09-cfe1-48d0-925f-86ca47d7f63a == \f\2\f\8\3\c\0\9\-\c\f\e\1\-\4\8\d\0\-\9\2\5\f\-\8\6\c\a\4\7\d\7\f\6\3\a ]] 00:14:35.350 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.350 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid bdadaff3-b0b9-464f-bc5d-1e8c7917a1be 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BDADAFF3B0B9464FBC5D1E8C7917A1BE 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BDADAFF3B0B9464FBC5D1E8C7917A1BE 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:35.612 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g BDADAFF3B0B9464FBC5D1E8C7917A1BE 00:14:35.873 [2024-12-09 11:29:27.829777] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:35.873 [2024-12-09 11:29:27.829810] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:35.873 [2024-12-09 11:29:27.829820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.873 request: 00:14:35.873 { 00:14:35.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.873 "namespace": { 00:14:35.873 "bdev_name": 
"invalid", 00:14:35.873 "nsid": 1, 00:14:35.873 "nguid": "BDADAFF3B0B9464FBC5D1E8C7917A1BE", 00:14:35.873 "no_auto_visible": false, 00:14:35.873 "hide_metadata": false 00:14:35.873 }, 00:14:35.873 "method": "nvmf_subsystem_add_ns", 00:14:35.873 "req_id": 1 00:14:35.873 } 00:14:35.873 Got JSON-RPC error response 00:14:35.873 response: 00:14:35.873 { 00:14:35.873 "code": -32602, 00:14:35.873 "message": "Invalid parameters" 00:14:35.873 } 00:14:35.873 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:35.873 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.873 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.873 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.873 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid bdadaff3-b0b9-464f-bc5d-1e8c7917a1be 00:14:35.873 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:35.873 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BDADAFF3B0B9464FBC5D1E8C7917A1BE -i 00:14:35.873 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3466963 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3466963 ']' 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3466963 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466963 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466963' 00:14:38.420 killing process with pid 3466963 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3466963 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3466963 00:14:38.420 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.682 rmmod nvme_tcp 00:14:38.682 rmmod nvme_fabrics 00:14:38.682 rmmod nvme_keyring 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3464688 ']' 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3464688 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3464688 ']' 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3464688 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464688 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464688' 00:14:38.682 killing process with pid 3464688 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3464688 00:14:38.682 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3464688 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 
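A minimal sketch of the iptr teardown spanning the three records around this point (grep -v SPDK_NVMF, iptables-save, iptables-restore), assuming the helper in nvmf/common.sh simply chains them into one pipeline:

# Rewrite the firewall ruleset, dropping every rule the tests tagged with the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore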
00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.942 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.857 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:40.857 00:14:40.857 real 0m27.817s 00:14:40.857 user 0m31.657s 00:14:40.857 sys 0m7.922s 00:14:40.857 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.857 11:29:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.857 ************************************ 00:14:40.857 END TEST nvmf_ns_masking 00:14:40.857 ************************************ 00:14:40.857 11:29:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:40.857 11:29:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.857 11:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.857 11:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.857 11:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.119 ************************************ 00:14:41.119 START TEST nvmf_nvme_cli 00:14:41.119 ************************************ 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.119 * Looking for test storage... 
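Condensed from the ns_masking trace above: the finished test's masking round-trip is a handful of rpc.py and nvme-cli calls. A minimal sketch with the NQNs from the log (rpc.py path shortened; the /dev/nvme0 device and nsid are illustrative):

# Expose namespace 1 of cnode1 to host1 only, then verify visibility from the initiator side.
scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme list-ns /dev/nvme0 | grep 0x1                            # nsid 1 should be listed for host1
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)  # an all-zero NGUID means the namespace is masked
scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# uuid2nguid in the trace amounts to uppercasing a UUID and stripping its dashes, e.g.
# bdadaff3-b0b9-464f-bc5d-1e8c7917a1be -> BDADAFF3B0B9464FBC5D1E8C7917A1BE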
00:14:41.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.119 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:41.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.120 --rc genhtml_branch_coverage=1 00:14:41.120 --rc genhtml_function_coverage=1 00:14:41.120 --rc genhtml_legend=1 00:14:41.120 --rc geninfo_all_blocks=1 00:14:41.120 --rc geninfo_unexecuted_blocks=1 00:14:41.120 00:14:41.120 ' 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:41.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.120 --rc genhtml_branch_coverage=1 00:14:41.120 --rc genhtml_function_coverage=1 00:14:41.120 --rc genhtml_legend=1 00:14:41.120 --rc geninfo_all_blocks=1 00:14:41.120 --rc geninfo_unexecuted_blocks=1 00:14:41.120 00:14:41.120 ' 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:41.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.120 --rc genhtml_branch_coverage=1 00:14:41.120 --rc genhtml_function_coverage=1 00:14:41.120 --rc genhtml_legend=1 00:14:41.120 --rc geninfo_all_blocks=1 00:14:41.120 --rc geninfo_unexecuted_blocks=1 00:14:41.120 00:14:41.120 ' 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:41.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.120 --rc genhtml_branch_coverage=1 00:14:41.120 --rc genhtml_function_coverage=1 00:14:41.120 --rc genhtml_legend=1 00:14:41.120 --rc geninfo_all_blocks=1 00:14:41.120 --rc geninfo_unexecuted_blocks=1 00:14:41.120 00:14:41.120 ' 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
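The lt/cmp_versions machinery traced above splits each version string on its separators and compares the numeric fields in order; a compressed, self-contained sketch of the same idea (dots only here, not the repo's exact helper):

# Succeed when $1 sorts strictly before $2 as dot-separated numeric versions.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov 1.15 predates lcov 2'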
00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.120 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.381 11:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.381 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.382 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.525 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.525 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:49.526 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:49.526 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.526 
11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:49.526 Found net devices under 0000:31:00.0: cvl_0_0 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:49.526 Found net devices under 0000:31:00.1: cvl_0_1 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:49.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:14:49.526 00:14:49.526 --- 10.0.0.2 ping statistics --- 00:14:49.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.526 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:14:49.526 00:14:49.526 --- 10.0.0.1 ping statistics --- 00:14:49.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.526 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.526 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3472648 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3472648 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3472648 ']' 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.527 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.527 [2024-12-09 11:29:40.864624] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
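The nvmfappstart step recorded here runs the target inside the test's network namespace so it owns the target-side E810 port; a minimal sketch of the launch using the namespace, shm id, event mask, and core mask from the records above (binary path shortened):

# Launch nvmf_tgt inside the target netns and remember its pid; waitforlisten
# then polls /var/tmp/spdk.sock until the app answers RPCs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!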
00:14:49.527 [2024-12-09 11:29:40.864690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.527 [2024-12-09 11:29:40.952777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.527 [2024-12-09 11:29:40.995658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.527 [2024-12-09 11:29:40.995694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.527 [2024-12-09 11:29:40.995702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.527 [2024-12-09 11:29:40.995709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.527 [2024-12-09 11:29:40.995714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.527 [2024-12-09 11:29:40.997249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.527 [2024-12-09 11:29:40.997384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.527 [2024-12-09 11:29:40.997544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.527 [2024-12-09 11:29:40.997544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:49.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.527 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 [2024-12-09 11:29:41.717821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 Malloc0 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
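Stripped of the rpc_cmd/xtrace plumbing, the target bring-up traced above and below is this sequence of RPCs (sizes, serial, model, and NQNs exactly as in the log; rpc.py path shortened):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB malloc bdev, 512-byte blocks
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420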
00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 Malloc1 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 [2024-12-09 11:29:41.816971] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.788 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:14:50.049 00:14:50.049 Discovery Log Number of Records 2, Generation counter 2 00:14:50.049 =====Discovery Log Entry 0====== 00:14:50.049 trtype: tcp 00:14:50.049 adrfam: ipv4 00:14:50.049 subtype: current discovery subsystem 00:14:50.049 treq: not required 00:14:50.049 portid: 0 00:14:50.049 trsvcid: 4420 00:14:50.049 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:50.049 traddr: 10.0.0.2 00:14:50.049 eflags: explicit discovery connections, duplicate discovery information 00:14:50.049 sectype: none 00:14:50.049 =====Discovery Log Entry 1====== 00:14:50.049 trtype: tcp 00:14:50.049 adrfam: ipv4 00:14:50.049 subtype: nvme subsystem 00:14:50.049 treq: not required 00:14:50.049 portid: 0 00:14:50.049 trsvcid: 4420 00:14:50.049 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:50.049 traddr: 10.0.0.2 00:14:50.049 eflags: none 00:14:50.049 sectype: none 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:50.049 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.433 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:51.433 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:51.433 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.433 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:51.433 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:51.433 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:53.350 11:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.350 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:53.610 /dev/nvme0n2 ]] 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.610 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:53.870 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.131 11:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.131 rmmod nvme_tcp 00:14:54.131 rmmod nvme_fabrics 00:14:54.131 rmmod nvme_keyring 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3472648 ']' 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3472648 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3472648 ']' 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3472648 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3472648 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3472648' 00:14:54.131 killing process with pid 3472648 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3472648 00:14:54.131 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3472648 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.392 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.393 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.393 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.393 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.393 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:56.939 00:14:56.939 real 0m15.445s 00:14:56.939 user 0m23.880s 00:14:56.939 sys 0m6.340s 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.939 ************************************ 00:14:56.939 END TEST nvmf_nvme_cli 00:14:56.939 ************************************ 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.939 ************************************ 00:14:56.939 START TEST nvmf_vfio_user 00:14:56.939 ************************************ 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:56.939 * Looking for test storage... 00:14:56.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.939 --rc genhtml_branch_coverage=1 00:14:56.939 --rc genhtml_function_coverage=1 00:14:56.939 --rc genhtml_legend=1 00:14:56.939 --rc geninfo_all_blocks=1 00:14:56.939 --rc geninfo_unexecuted_blocks=1 00:14:56.939 00:14:56.939 ' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.939 --rc genhtml_branch_coverage=1 00:14:56.939 --rc genhtml_function_coverage=1 00:14:56.939 --rc genhtml_legend=1 00:14:56.939 --rc geninfo_all_blocks=1 00:14:56.939 --rc geninfo_unexecuted_blocks=1 00:14:56.939 00:14:56.939 ' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.939 --rc genhtml_branch_coverage=1 00:14:56.939 --rc genhtml_function_coverage=1 00:14:56.939 --rc genhtml_legend=1 00:14:56.939 --rc geninfo_all_blocks=1 00:14:56.939 --rc geninfo_unexecuted_blocks=1 00:14:56.939 00:14:56.939 ' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.939 --rc genhtml_branch_coverage=1 00:14:56.939 --rc genhtml_function_coverage=1 00:14:56.939 --rc genhtml_legend=1 00:14:56.939 --rc geninfo_all_blocks=1 00:14:56.939 --rc geninfo_unexecuted_blocks=1 00:14:56.939 00:14:56.939 ' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.939 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3474344 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3474344' 00:14:56.940 Process pid: 3474344 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3474344 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3474344 ']' 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.940 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:56.940 [2024-12-09 11:29:48.862360] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:14:56.940 [2024-12-09 11:29:48.862449] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.940 [2024-12-09 11:29:48.946492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.940 [2024-12-09 11:29:48.988267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.940 [2024-12-09 11:29:48.988305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:56.940 [2024-12-09 11:29:48.988313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.940 [2024-12-09 11:29:48.988320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.940 [2024-12-09 11:29:48.988326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.940 [2024-12-09 11:29:48.990119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.940 [2024-12-09 11:29:48.990252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.940 [2024-12-09 11:29:48.990393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.940 [2024-12-09 11:29:48.990394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.880 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.880 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:57.880 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.820 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:58.820 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.820 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.821 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.821 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:58.821 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.081 Malloc1 00:14:59.081 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:59.341 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.341 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.601 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.601 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.601 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.863 Malloc2 00:14:59.863 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:14:59.863 11:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:00.124 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:00.387 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:00.387 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:00.387 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.387 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:00.387 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:00.387 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:00.387 [2024-12-09 11:29:52.399309] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:15:00.387 [2024-12-09 11:29:52.399360] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475105 ] 00:15:00.387 [2024-12-09 11:29:52.454149] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:00.387 [2024-12-09 11:29:52.462362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.387 [2024-12-09 11:29:52.462385] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6db3664000 00:15:00.387 [2024-12-09 11:29:52.463356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.464359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.465367] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.466371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.467373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.468388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.469389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.470388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.387 [2024-12-09 11:29:52.472018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.387 [2024-12-09 11:29:52.472032] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6db3659000 00:15:00.387 [2024-12-09 11:29:52.473358] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.388 [2024-12-09 11:29:52.494167] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:00.388 [2024-12-09 11:29:52.494196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:00.388 [2024-12-09 11:29:52.496561] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.388 [2024-12-09 11:29:52.496607] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:00.388 [2024-12-09 11:29:52.496694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:00.388 [2024-12-09 11:29:52.496711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:00.388 [2024-12-09 11:29:52.496717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:00.388 [2024-12-09 11:29:52.497568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:00.388 [2024-12-09 11:29:52.497580] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:00.388 [2024-12-09 11:29:52.497588] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:00.388 [2024-12-09 11:29:52.498569] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.388 [2024-12-09 11:29:52.498579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:00.388 [2024-12-09 11:29:52.498586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.388 [2024-12-09 11:29:52.499572] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:00.388 [2024-12-09 11:29:52.499581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.388 [2024-12-09 11:29:52.500581] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:15:00.388 [2024-12-09 11:29:52.500589] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:00.388 [2024-12-09 11:29:52.500594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:00.388 [2024-12-09 11:29:52.500602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.388 [2024-12-09 11:29:52.500710] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:00.388 [2024-12-09 11:29:52.500715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.388 [2024-12-09 11:29:52.500720] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:00.388 [2024-12-09 11:29:52.501596] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:00.388 [2024-12-09 11:29:52.502596] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:00.388 [2024-12-09 11:29:52.503611] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.388 [2024-12-09 11:29:52.504609] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.388 [2024-12-09 11:29:52.504666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.388 [2024-12-09 11:29:52.505620] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:00.388 [2024-12-09 11:29:52.505628] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.388 [2024-12-09 11:29:52.505633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:00.388 [2024-12-09 11:29:52.505662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505685] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.388 [2024-12-09 11:29:52.505690] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.388 [2024-12-09 11:29:52.505694] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.388 [2024-12-09 11:29:52.505708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:00.388 [2024-12-09 11:29:52.505739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:00.388 [2024-12-09 11:29:52.505751] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:00.388 [2024-12-09 11:29:52.505756] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:00.388 [2024-12-09 11:29:52.505760] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:00.388 [2024-12-09 11:29:52.505765] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:00.388 [2024-12-09 11:29:52.505771] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:00.388 [2024-12-09 11:29:52.505776] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:00.388 [2024-12-09 11:29:52.505781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:00.388 [2024-12-09 11:29:52.505811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:00.388 [2024-12-09 11:29:52.505823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.388 [2024-12-09 11:29:52.505833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.388 [2024-12-09 11:29:52.505842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.388 [2024-12-09 11:29:52.505851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.388 [2024-12-09 11:29:52.505855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:00.388 [2024-12-09 11:29:52.505885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:00.388 [2024-12-09 11:29:52.505891] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:00.388 
[2024-12-09 11:29:52.505896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.505918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.388 [2024-12-09 11:29:52.505932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:00.388 [2024-12-09 11:29:52.505994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.506003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.506025] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:00.388 [2024-12-09 11:29:52.506030] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:00.388 [2024-12-09 11:29:52.506034] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.388 [2024-12-09 11:29:52.506040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:00.388 [2024-12-09 11:29:52.506049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:00.388 [2024-12-09 11:29:52.506059] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:00.388 [2024-12-09 11:29:52.506073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.506081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:00.388 [2024-12-09 11:29:52.506088] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.388 [2024-12-09 11:29:52.506092] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.389 [2024-12-09 11:29:52.506097] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.389 [2024-12-09 11:29:52.506104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506155] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.389 [2024-12-09 11:29:52.506159] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.389 [2024-12-09 11:29:52.506163] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.389 [2024-12-09 11:29:52.506168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506226] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:00.389 [2024-12-09 11:29:52.506230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:00.389 [2024-12-09 11:29:52.506236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:00.389 [2024-12-09 11:29:52.506254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506341] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:00.389 [2024-12-09 11:29:52.506345] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:00.389 [2024-12-09 11:29:52.506349] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:00.389 [2024-12-09 11:29:52.506353] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:00.389 [2024-12-09 11:29:52.506356] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:00.389 [2024-12-09 11:29:52.506362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:00.389 [2024-12-09 11:29:52.506370] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:00.389 [2024-12-09 11:29:52.506375] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:00.389 [2024-12-09 11:29:52.506378] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.389 [2024-12-09 11:29:52.506384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506391] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:00.389 [2024-12-09 11:29:52.506396] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.389 [2024-12-09 11:29:52.506399] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.389 [2024-12-09 11:29:52.506405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506413] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:00.389 [2024-12-09 11:29:52.506417] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:00.389 [2024-12-09 11:29:52.506420] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.389 [2024-12-09 11:29:52.506426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:00.389 [2024-12-09 11:29:52.506433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:00.389 [2024-12-09 11:29:52.506465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:00.389 ===================================================== 00:15:00.389 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.389 ===================================================== 00:15:00.389 Controller Capabilities/Features 00:15:00.389 ================================ 00:15:00.389 Vendor ID: 4e58 00:15:00.389 Subsystem Vendor ID: 4e58 00:15:00.389 Serial Number: SPDK1 00:15:00.389 Model Number: SPDK bdev Controller 00:15:00.389 Firmware Version: 25.01 00:15:00.389 Recommended Arb Burst: 6 00:15:00.389 IEEE OUI Identifier: 8d 6b 50 00:15:00.389 Multi-path I/O 00:15:00.389 May have multiple subsystem ports: Yes 00:15:00.389 May have multiple controllers: Yes 00:15:00.389 Associated with SR-IOV VF: No 00:15:00.389 Max Data Transfer Size: 131072 00:15:00.389 Max Number of Namespaces: 32 00:15:00.389 Max Number of I/O Queues: 127 00:15:00.389 NVMe Specification Version (VS): 1.3 00:15:00.389 NVMe Specification Version (Identify): 1.3 00:15:00.389 Maximum Queue Entries: 256 00:15:00.389 Contiguous Queues Required: Yes 00:15:00.389 Arbitration Mechanisms Supported 00:15:00.389 Weighted Round Robin: Not Supported 00:15:00.389 Vendor Specific: Not Supported 00:15:00.389 Reset Timeout: 15000 ms 00:15:00.389 Doorbell Stride: 4 bytes 00:15:00.389 NVM Subsystem Reset: Not Supported 00:15:00.389 Command Sets Supported 00:15:00.389 NVM Command Set: Supported 00:15:00.389 Boot Partition: Not Supported 00:15:00.389 Memory Page Size Minimum: 4096 bytes 00:15:00.389 Memory Page Size Maximum: 4096 bytes 00:15:00.389 Persistent Memory Region: Not Supported 00:15:00.389 Optional Asynchronous Events Supported 00:15:00.389 Namespace Attribute Notices: Supported 00:15:00.389 Firmware Activation Notices: Not Supported 00:15:00.389 ANA Change Notices: Not Supported 00:15:00.389 PLE Aggregate Log Change Notices: Not Supported 00:15:00.389 LBA Status Info Alert Notices: Not Supported 00:15:00.389 EGE Aggregate Log Change Notices: Not Supported 00:15:00.389 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.389 Zone Descriptor Change Notices: Not Supported 00:15:00.389 Discovery Log Change Notices: Not Supported 00:15:00.389 Controller Attributes 00:15:00.389 128-bit Host Identifier: Supported 00:15:00.389 Non-Operational Permissive Mode: Not Supported 00:15:00.389 NVM Sets: Not Supported 00:15:00.389 Read Recovery Levels: Not Supported 00:15:00.389 Endurance Groups: Not Supported 00:15:00.389 Predictable Latency Mode: Not Supported 00:15:00.389 Traffic Based Keep ALive: Not Supported 00:15:00.389 Namespace Granularity: Not Supported 00:15:00.389 SQ Associations: Not Supported 00:15:00.389 UUID List: Not Supported 00:15:00.389 Multi-Domain Subsystem: Not Supported 00:15:00.389 Fixed Capacity Management: Not Supported 00:15:00.389 Variable Capacity Management: Not Supported 00:15:00.389 Delete Endurance Group: Not Supported 00:15:00.389 Delete NVM Set: Not Supported 00:15:00.389 Extended LBA Formats Supported: Not Supported 00:15:00.389 Flexible Data Placement Supported: Not Supported 00:15:00.389 00:15:00.389 Controller Memory Buffer Support 00:15:00.389 ================================ 00:15:00.389 
Supported: No 00:15:00.389 00:15:00.389 Persistent Memory Region Support 00:15:00.389 ================================ 00:15:00.390 Supported: No 00:15:00.390 00:15:00.390 Admin Command Set Attributes 00:15:00.390 ============================ 00:15:00.390 Security Send/Receive: Not Supported 00:15:00.390 Format NVM: Not Supported 00:15:00.390 Firmware Activate/Download: Not Supported 00:15:00.390 Namespace Management: Not Supported 00:15:00.390 Device Self-Test: Not Supported 00:15:00.390 Directives: Not Supported 00:15:00.390 NVMe-MI: Not Supported 00:15:00.390 Virtualization Management: Not Supported 00:15:00.390 Doorbell Buffer Config: Not Supported 00:15:00.390 Get LBA Status Capability: Not Supported 00:15:00.390 Command & Feature Lockdown Capability: Not Supported 00:15:00.390 Abort Command Limit: 4 00:15:00.390 Async Event Request Limit: 4 00:15:00.390 Number of Firmware Slots: N/A 00:15:00.390 Firmware Slot 1 Read-Only: N/A 00:15:00.390 Firmware Activation Without Reset: N/A 00:15:00.390 Multiple Update Detection Support: N/A 00:15:00.390 Firmware Update Granularity: No Information Provided 00:15:00.390 Per-Namespace SMART Log: No 00:15:00.390 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.390 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:00.390 Command Effects Log Page: Supported 00:15:00.390 Get Log Page Extended Data: Supported 00:15:00.390 Telemetry Log Pages: Not Supported 00:15:00.390 Persistent Event Log Pages: Not Supported 00:15:00.390 Supported Log Pages Log Page: May Support 00:15:00.390 Commands Supported & Effects Log Page: Not Supported 00:15:00.390 Feature Identifiers & Effects Log Page:May Support 00:15:00.390 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.390 Data Area 4 for Telemetry Log: Not Supported 00:15:00.390 Error Log Page Entries Supported: 128 00:15:00.390 Keep Alive: Supported 00:15:00.390 Keep Alive Granularity: 10000 ms 00:15:00.390 00:15:00.390 NVM Command Set Attributes 00:15:00.390 ========================== 00:15:00.390 Submission Queue Entry Size 00:15:00.390 Max: 64 00:15:00.390 Min: 64 00:15:00.390 Completion Queue Entry Size 00:15:00.390 Max: 16 00:15:00.390 Min: 16 00:15:00.390 Number of Namespaces: 32 00:15:00.390 Compare Command: Supported 00:15:00.390 Write Uncorrectable Command: Not Supported 00:15:00.390 Dataset Management Command: Supported 00:15:00.390 Write Zeroes Command: Supported 00:15:00.390 Set Features Save Field: Not Supported 00:15:00.390 Reservations: Not Supported 00:15:00.390 Timestamp: Not Supported 00:15:00.390 Copy: Supported 00:15:00.390 Volatile Write Cache: Present 00:15:00.390 Atomic Write Unit (Normal): 1 00:15:00.390 Atomic Write Unit (PFail): 1 00:15:00.390 Atomic Compare & Write Unit: 1 00:15:00.390 Fused Compare & Write: Supported 00:15:00.390 Scatter-Gather List 00:15:00.390 SGL Command Set: Supported (Dword aligned) 00:15:00.390 SGL Keyed: Not Supported 00:15:00.390 SGL Bit Bucket Descriptor: Not Supported 00:15:00.390 SGL Metadata Pointer: Not Supported 00:15:00.390 Oversized SGL: Not Supported 00:15:00.390 SGL Metadata Address: Not Supported 00:15:00.390 SGL Offset: Not Supported 00:15:00.390 Transport SGL Data Block: Not Supported 00:15:00.390 Replay Protected Memory Block: Not Supported 00:15:00.390 00:15:00.390 Firmware Slot Information 00:15:00.390 ========================= 00:15:00.390 Active slot: 1 00:15:00.390 Slot 1 Firmware Revision: 25.01 00:15:00.390 00:15:00.390 00:15:00.390 Commands Supported and Effects 00:15:00.390 ============================== 00:15:00.390 Admin 
Commands
00:15:00.390 --------------
00:15:00.390 Get Log Page (02h): Supported
00:15:00.390 Identify (06h): Supported
00:15:00.390 Abort (08h): Supported
00:15:00.390 Set Features (09h): Supported
00:15:00.390 Get Features (0Ah): Supported
00:15:00.390 Asynchronous Event Request (0Ch): Supported
00:15:00.390 Keep Alive (18h): Supported
00:15:00.390 I/O Commands
00:15:00.390 ------------
00:15:00.390 Flush (00h): Supported LBA-Change
00:15:00.390 Write (01h): Supported LBA-Change
00:15:00.390 Read (02h): Supported
00:15:00.390 Compare (05h): Supported
00:15:00.390 Write Zeroes (08h): Supported LBA-Change
00:15:00.390 Dataset Management (09h): Supported LBA-Change
00:15:00.390 Copy (19h): Supported LBA-Change
00:15:00.390
00:15:00.390 Error Log
00:15:00.390 =========
00:15:00.390
00:15:00.390 Arbitration
00:15:00.390 ===========
00:15:00.390 Arbitration Burst: 1
00:15:00.390
00:15:00.390 Power Management
00:15:00.390 ================
00:15:00.390 Number of Power States: 1
00:15:00.390 Current Power State: Power State #0
00:15:00.390 Power State #0:
00:15:00.390 Max Power: 0.00 W
00:15:00.390 Non-Operational State: Operational
00:15:00.390 Entry Latency: Not Reported
00:15:00.390 Exit Latency: Not Reported
00:15:00.390 Relative Read Throughput: 0
00:15:00.390 Relative Read Latency: 0
00:15:00.390 Relative Write Throughput: 0
00:15:00.390 Relative Write Latency: 0
00:15:00.390 Idle Power: Not Reported
00:15:00.390 Active Power: Not Reported
00:15:00.390 Non-Operational Permissive Mode: Not Supported
00:15:00.390
00:15:00.390 Health Information
00:15:00.390 ==================
00:15:00.390 Critical Warnings:
00:15:00.390 Available Spare Space: OK
00:15:00.390 Temperature: OK
00:15:00.390 Device Reliability: OK
00:15:00.390 Read Only: No
00:15:00.390 Volatile Memory Backup: OK
00:15:00.390 Current Temperature: 0 Kelvin (-273 Celsius)
00:15:00.390 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:15:00.390 Available Spare: 0%
00:15:00.390 [2024-12-09 11:29:52.506567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:00.390 [2024-12-09 11:29:52.506576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:00.390 [2024-12-09 11:29:52.506606] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:15:00.390 [2024-12-09 11:29:52.506616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:00.390 [2024-12-09 11:29:52.506623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:00.390 [2024-12-09 11:29:52.506631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:00.390 [2024-12-09 11:29:52.506637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:00.390 [2024-12-09 11:29:52.507629] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:15:00.390 [2024-12-09 11:29:52.507640] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:15:00.390 [2024-12-09 11:29:52.508631] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:00.390 [2024-12-09 11:29:52.508672] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:15:00.390 [2024-12-09 11:29:52.508678] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:15:00.390 [2024-12-09 11:29:52.509636] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:15:00.390 [2024-12-09 11:29:52.509647] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:15:00.390 [2024-12-09 11:29:52.509705] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:15:00.390 [2024-12-09 11:29:52.514018] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:00.653 Available Spare Threshold: 0%
00:15:00.653 Life Percentage Used: 0%
00:15:00.653 Data Units Read: 0
00:15:00.653 Data Units Written: 0
00:15:00.653 Host Read Commands: 0
00:15:00.653 Host Write Commands: 0
00:15:00.653 Controller Busy Time: 0 minutes
00:15:00.653 Power Cycles: 0
00:15:00.653 Power On Hours: 0 hours
00:15:00.653 Unsafe Shutdowns: 0
00:15:00.653 Unrecoverable Media Errors: 0
00:15:00.653 Lifetime Error Log Entries: 0
00:15:00.653 Warning Temperature Time: 0 minutes
00:15:00.653 Critical Temperature Time: 0 minutes
00:15:00.653
00:15:00.653 Number of Queues
00:15:00.653 ================
00:15:00.653 Number of I/O Submission Queues: 127
00:15:00.653 Number of I/O Completion Queues: 127
00:15:00.653
00:15:00.653 Active Namespaces
00:15:00.653 =================
00:15:00.653 Namespace ID:1
00:15:00.653 Error Recovery Timeout: Unlimited
00:15:00.653 Command Set Identifier: NVM (00h)
00:15:00.653 Deallocate: Supported
00:15:00.653 Deallocated/Unwritten Error: Not Supported
00:15:00.653 Deallocated Read Value: Unknown
00:15:00.653 Deallocate in Write Zeroes: Not Supported
00:15:00.653 Deallocated Guard Field: 0xFFFF
00:15:00.653 Flush: Supported
00:15:00.653 Reservation: Supported
00:15:00.653 Namespace Sharing Capabilities: Multiple Controllers
00:15:00.653 Size (in LBAs): 131072 (0GiB)
00:15:00.653 Capacity (in LBAs): 131072 (0GiB)
00:15:00.653 Utilization (in LBAs): 131072 (0GiB)
00:15:00.653 NGUID: CA18B2DCF9474868BF2405758D24630F
00:15:00.653 UUID: ca18b2dc-f947-4868-bf24-05758d24630f
00:15:00.653 Thin Provisioning: Not Supported
00:15:00.653 Per-NS Atomic Units: Yes
00:15:00.653 Atomic Boundary Size (Normal): 0
00:15:00.653 Atomic Boundary Size (PFail): 0
00:15:00.653 Atomic Boundary Offset: 0
00:15:00.653 Maximum Single Source Range Length: 65535
00:15:00.653 Maximum Copy Length: 65535
00:15:00.653 Maximum Source Range Count: 1
00:15:00.653 NGUID/EUI64 Never Reused: No
00:15:00.653 Namespace Write Protected: No
00:15:00.653 Number of LBA Formats: 1
00:15:00.653 Current LBA Format: LBA Format #00
00:15:00.653 LBA Format #00: Data Size: 512 Metadata Size: 0
00:15:00.653
00:15:00.653 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
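The -r argument above packs the transport type, the vfio-user socket directory (traddr) and the subsystem NQN into a single transport ID string. A minimal sketch of scripting the same invocation, assuming the spdk_nvme_perf binary path from this workspace (the helper name run_perf is illustrative, not part of the test suite):

    import shlex, subprocess

    PERF = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf"

    def run_perf(traddr, subnqn, workload="read", qdepth=128, io_size=4096, seconds=5):
        # Same 'key:value' transport ID format the harness passes via -r.
        trid = f"trtype:VFIOUSER traddr:{traddr} subnqn:{subnqn}"
        cmd = [PERF, "-r", trid, "-s", "256", "-g",
               "-q", str(qdepth), "-o", str(io_size),
               "-w", workload, "-t", str(seconds), "-c", "0x2"]
        print("+ " + " ".join(shlex.quote(c) for c in cmd))
        subprocess.run(cmd, check=True)

    run_perf("/var/run/vfio-user/domain/vfio-user1/1", "nqn.2019-07.io.spdk:cnode1")

The same transport string, with -w write, drives the write pass that follows.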
00:15:00.653 [2024-12-09 11:29:52.720735] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.944 Initializing NVMe Controllers 00:15:05.944 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.944 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:05.944 Initialization complete. Launching workers. 00:15:05.944 ======================================================== 00:15:05.944 Latency(us) 00:15:05.944 Device Information : IOPS MiB/s Average min max 00:15:05.944 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39987.46 156.20 3200.68 873.85 9752.35 00:15:05.944 ======================================================== 00:15:05.944 Total : 39987.46 156.20 3200.68 873.85 9752.35 00:15:05.944 00:15:05.944 [2024-12-09 11:29:57.738275] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.944 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:05.944 [2024-12-09 11:29:57.933164] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.237 Initializing NVMe Controllers 00:15:11.237 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.237 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:11.237 Initialization complete. Launching workers. 
00:15:11.237 ======================================================== 00:15:11.237 Latency(us) 00:15:11.237 Device Information : IOPS MiB/s Average min max 00:15:11.237 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16047.18 62.68 7975.97 6862.71 9015.94 00:15:11.237 ======================================================== 00:15:11.237 Total : 16047.18 62.68 7975.97 6862.71 9015.94 00:15:11.237 00:15:11.237 [2024-12-09 11:30:02.965492] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.237 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:11.237 [2024-12-09 11:30:03.165374] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.532 [2024-12-09 11:30:08.233171] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.532 Initializing NVMe Controllers 00:15:16.532 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.532 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.532 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:16.532 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:16.532 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:16.532 Initialization complete. Launching workers. 00:15:16.532 Starting thread on core 2 00:15:16.532 Starting thread on core 3 00:15:16.532 Starting thread on core 1 00:15:16.532 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:16.532 [2024-12-09 11:30:08.516415] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.834 [2024-12-09 11:30:11.581761] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.834 Initializing NVMe Controllers 00:15:19.834 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.834 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.834 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:19.834 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:19.834 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:19.834 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:19.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:19.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:19.834 Initialization complete. Launching workers. 
00:15:19.834 Starting thread on core 1 with urgent priority queue 00:15:19.834 Starting thread on core 2 with urgent priority queue 00:15:19.834 Starting thread on core 3 with urgent priority queue 00:15:19.834 Starting thread on core 0 with urgent priority queue 00:15:19.834 SPDK bdev Controller (SPDK1 ) core 0: 9118.00 IO/s 10.97 secs/100000 ios 00:15:19.835 SPDK bdev Controller (SPDK1 ) core 1: 11571.00 IO/s 8.64 secs/100000 ios 00:15:19.835 SPDK bdev Controller (SPDK1 ) core 2: 9695.67 IO/s 10.31 secs/100000 ios 00:15:19.835 SPDK bdev Controller (SPDK1 ) core 3: 11966.33 IO/s 8.36 secs/100000 ios 00:15:19.835 ======================================================== 00:15:19.835 00:15:19.835 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:19.835 [2024-12-09 11:30:11.861701] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.835 Initializing NVMe Controllers 00:15:19.835 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.835 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.835 Namespace ID: 1 size: 0GB 00:15:19.835 Initialization complete. 00:15:19.835 INFO: using host memory buffer for IO 00:15:19.835 Hello world! 00:15:19.835 [2024-12-09 11:30:11.895935] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.835 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:20.095 [2024-12-09 11:30:12.182457] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.483 Initializing NVMe Controllers 00:15:21.483 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.483 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.483 Initialization complete. Launching workers. 
00:15:21.483 submit (in ns) avg, min, max = 8824.9, 3898.3, 4996118.3 00:15:21.483 complete (in ns) avg, min, max = 18405.4, 2423.3, 4994940.8 00:15:21.483 00:15:21.483 Submit histogram 00:15:21.483 ================ 00:15:21.483 Range in us Cumulative Count 00:15:21.483 3.893 - 3.920: 0.5419% ( 100) 00:15:21.483 3.920 - 3.947: 4.2213% ( 679) 00:15:21.483 3.947 - 3.973: 12.0516% ( 1445) 00:15:21.483 3.973 - 4.000: 23.2253% ( 2062) 00:15:21.483 4.000 - 4.027: 35.1306% ( 2197) 00:15:21.483 4.027 - 4.053: 48.5315% ( 2473) 00:15:21.483 4.053 - 4.080: 64.8965% ( 3020) 00:15:21.483 4.080 - 4.107: 80.2699% ( 2837) 00:15:21.483 4.107 - 4.133: 91.2377% ( 2024) 00:15:21.483 4.133 - 4.160: 96.6620% ( 1001) 00:15:21.483 4.160 - 4.187: 98.4936% ( 338) 00:15:21.483 4.187 - 4.213: 99.2251% ( 135) 00:15:21.483 4.213 - 4.240: 99.4635% ( 44) 00:15:21.483 4.240 - 4.267: 99.5123% ( 9) 00:15:21.483 4.267 - 4.293: 99.5231% ( 2) 00:15:21.483 4.293 - 4.320: 99.5394% ( 3) 00:15:21.483 4.320 - 4.347: 99.5448% ( 1) 00:15:21.483 4.507 - 4.533: 99.5502% ( 1) 00:15:21.483 4.773 - 4.800: 99.5557% ( 1) 00:15:21.483 4.800 - 4.827: 99.5611% ( 1) 00:15:21.483 4.853 - 4.880: 99.5665% ( 1) 00:15:21.483 4.880 - 4.907: 99.5719% ( 1) 00:15:21.483 4.933 - 4.960: 99.5773% ( 1) 00:15:21.483 5.147 - 5.173: 99.5827% ( 1) 00:15:21.483 5.440 - 5.467: 99.5882% ( 1) 00:15:21.483 5.547 - 5.573: 99.5936% ( 1) 00:15:21.483 5.680 - 5.707: 99.5990% ( 1) 00:15:21.483 5.787 - 5.813: 99.6044% ( 1) 00:15:21.483 5.840 - 5.867: 99.6098% ( 1) 00:15:21.483 5.973 - 6.000: 99.6207% ( 2) 00:15:21.483 6.000 - 6.027: 99.6261% ( 1) 00:15:21.483 6.053 - 6.080: 99.6315% ( 1) 00:15:21.483 6.080 - 6.107: 99.6369% ( 1) 00:15:21.483 6.107 - 6.133: 99.6424% ( 1) 00:15:21.483 6.160 - 6.187: 99.6478% ( 1) 00:15:21.483 6.240 - 6.267: 99.6532% ( 1) 00:15:21.483 6.267 - 6.293: 99.6586% ( 1) 00:15:21.483 6.480 - 6.507: 99.6640% ( 1) 00:15:21.483 6.560 - 6.587: 99.6694% ( 1) 00:15:21.483 6.613 - 6.640: 99.6749% ( 1) 00:15:21.483 6.720 - 6.747: 99.6857% ( 2) 00:15:21.483 6.827 - 6.880: 99.7128% ( 5) 00:15:21.483 6.880 - 6.933: 99.7182% ( 1) 00:15:21.483 6.933 - 6.987: 99.7291% ( 2) 00:15:21.483 6.987 - 7.040: 99.7345% ( 1) 00:15:21.483 7.040 - 7.093: 99.7453% ( 2) 00:15:21.483 7.093 - 7.147: 99.7616% ( 3) 00:15:21.483 7.147 - 7.200: 99.7778% ( 3) 00:15:21.483 7.200 - 7.253: 99.7941% ( 3) 00:15:21.483 7.253 - 7.307: 99.7995% ( 1) 00:15:21.483 7.360 - 7.413: 99.8049% ( 1) 00:15:21.483 7.413 - 7.467: 99.8158% ( 2) 00:15:21.483 7.520 - 7.573: 99.8266% ( 2) 00:15:21.483 7.627 - 7.680: 99.8320% ( 1) 00:15:21.483 7.733 - 7.787: 99.8374% ( 1) 00:15:21.483 7.893 - 7.947: 99.8483% ( 2) 00:15:21.483 7.947 - 8.000: 99.8537% ( 1) 00:15:21.483 8.160 - 8.213: 99.8591% ( 1) 00:15:21.483 8.640 - 8.693: 99.8645% ( 1) 00:15:21.483 9.493 - 9.547: 99.8699% ( 1) 00:15:21.483 13.387 - 13.440: 99.8754% ( 1) 00:15:21.483 13.867 - 13.973: 99.8808% ( 1) 00:15:21.483 3017.387 - 3031.040: 99.8862% ( 1) 00:15:21.483 3986.773 - 4014.080: 99.9946% ( 20) 00:15:21.483 4969.813 - 4997.120: 100.0000% ( 1) 00:15:21.483 00:15:21.483 Complete histogram 00:15:21.483 ================== 00:15:21.483 Range in us Cumulative Count 00:15:21.483 2.413 - 2.427: 0.0054% ( 1) 00:15:21.483 2.440 - 2.453: 0.1301% ( 23) 00:15:21.483 2.453 - 2.467: 2.1621% ( 375) 00:15:21.483 2.467 - 2.480: 4.6982% ( 468) 00:15:21.483 2.480 - 2.493: 6.1396% ( 266) 00:15:21.483 2.493 - 2.507: 7.0879% ( 175) 00:15:21.483 2.507 - 2.520: 16.7335% ( 1780) 00:15:21.483 2.520 - 2.533: 43.1397% ( 4873) 00:15:21.483 2.533 - 2.547: 65.9315% ( 4206) 
00:15:21.483 2.547 - 2.560: 82.6975% ( 3094)
00:15:21.483 2.560 - 2.573: 93.3673% ( 1969)
00:15:21.483 2.573 - 2.587: 97.7078% ( 801)
[2024-12-09 11:30:13.202981] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:21.483 2.587 - 2.600: 98.9650% ( 232)
00:15:21.483 2.600 - 2.613: 99.2522% ( 53)
00:15:21.483 2.613 - 2.627: 99.3064% ( 10)
00:15:21.483 2.627 - 2.640: 99.3281% ( 4)
00:15:21.483 3.160 - 3.173: 99.3335% ( 1)
00:15:21.483 3.240 - 3.253: 99.3389% ( 1)
00:15:21.483 4.453 - 4.480: 99.3443% ( 1)
00:15:21.483 4.507 - 4.533: 99.3497% ( 1)
00:15:21.483 4.560 - 4.587: 99.3552% ( 1)
00:15:21.483 4.640 - 4.667: 99.3660% ( 2)
00:15:21.483 4.667 - 4.693: 99.3714% ( 1)
00:15:21.483 4.693 - 4.720: 99.3768% ( 1)
00:15:21.483 4.880 - 4.907: 99.3877% ( 2)
00:15:21.483 4.907 - 4.933: 99.3931% ( 1)
00:15:21.483 4.960 - 4.987: 99.4039% ( 2)
00:15:21.483 5.013 - 5.040: 99.4093% ( 1)
00:15:21.483 5.040 - 5.067: 99.4148% ( 1)
00:15:21.483 5.093 - 5.120: 99.4256% ( 2)
00:15:21.483 5.120 - 5.147: 99.4310% ( 1)
00:15:21.483 5.173 - 5.200: 99.4364% ( 1)
00:15:21.483 5.200 - 5.227: 99.4419% ( 1)
00:15:21.483 5.253 - 5.280: 99.4473% ( 1)
00:15:21.483 5.307 - 5.333: 99.4527% ( 1)
00:15:21.483 5.413 - 5.440: 99.4581% ( 1)
00:15:21.483 5.440 - 5.467: 99.4689% ( 2)
00:15:21.483 5.467 - 5.493: 99.4744% ( 1)
00:15:21.483 5.547 - 5.573: 99.4906% ( 3)
00:15:21.483 5.627 - 5.653: 99.5123% ( 4)
00:15:21.483 5.733 - 5.760: 99.5177% ( 1)
00:15:21.483 5.760 - 5.787: 99.5286% ( 2)
00:15:21.483 5.867 - 5.893: 99.5340% ( 1)
00:15:21.483 5.893 - 5.920: 99.5394% ( 1)
00:15:21.483 5.920 - 5.947: 99.5448% ( 1)
00:15:21.483 6.000 - 6.027: 99.5502% ( 1)
00:15:21.484 6.053 - 6.080: 99.5557% ( 1)
00:15:21.484 6.133 - 6.160: 99.5611% ( 1)
00:15:21.484 6.507 - 6.533: 99.5719% ( 2)
00:15:21.484 6.587 - 6.613: 99.5773% ( 1)
00:15:21.484 6.987 - 7.040: 99.5827% ( 1)
00:15:21.484 8.373 - 8.427: 99.5882% ( 1)
00:15:21.484 12.320 - 12.373: 99.5936% ( 1)
00:15:21.484 12.960 - 13.013: 99.5990% ( 1)
00:15:21.484 13.867 - 13.973: 99.6044% ( 1)
00:15:21.484 3986.773 - 4014.080: 99.9892% ( 71)
00:15:21.484 4041.387 - 4068.693: 99.9946% ( 1)
00:15:21.484 4969.813 - 4997.120: 100.0000% ( 1)
00:15:21.484
00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:21.484 [
00:15:21.484 {
00:15:21.484 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:21.484 "subtype": "Discovery",
00:15:21.484 "listen_addresses": [],
00:15:21.484 "allow_any_host": true,
00:15:21.484 "hosts": []
00:15:21.484 },
00:15:21.484 {
00:15:21.484 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:21.484 "subtype": "NVMe",
00:15:21.484 "listen_addresses": [
00:15:21.484 {
00:15:21.484 "trtype": "VFIOUSER",
00:15:21.484 "adrfam": "IPv4",
00:15:21.484 "traddr":
"/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.484 "trsvcid": "0" 00:15:21.484 } 00:15:21.484 ], 00:15:21.484 "allow_any_host": true, 00:15:21.484 "hosts": [], 00:15:21.484 "serial_number": "SPDK1", 00:15:21.484 "model_number": "SPDK bdev Controller", 00:15:21.484 "max_namespaces": 32, 00:15:21.484 "min_cntlid": 1, 00:15:21.484 "max_cntlid": 65519, 00:15:21.484 "namespaces": [ 00:15:21.484 { 00:15:21.484 "nsid": 1, 00:15:21.484 "bdev_name": "Malloc1", 00:15:21.484 "name": "Malloc1", 00:15:21.484 "nguid": "CA18B2DCF9474868BF2405758D24630F", 00:15:21.484 "uuid": "ca18b2dc-f947-4868-bf24-05758d24630f" 00:15:21.484 } 00:15:21.484 ] 00:15:21.484 }, 00:15:21.484 { 00:15:21.484 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.484 "subtype": "NVMe", 00:15:21.484 "listen_addresses": [ 00:15:21.484 { 00:15:21.484 "trtype": "VFIOUSER", 00:15:21.484 "adrfam": "IPv4", 00:15:21.484 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.484 "trsvcid": "0" 00:15:21.484 } 00:15:21.484 ], 00:15:21.484 "allow_any_host": true, 00:15:21.484 "hosts": [], 00:15:21.484 "serial_number": "SPDK2", 00:15:21.484 "model_number": "SPDK bdev Controller", 00:15:21.484 "max_namespaces": 32, 00:15:21.484 "min_cntlid": 1, 00:15:21.484 "max_cntlid": 65519, 00:15:21.484 "namespaces": [ 00:15:21.484 { 00:15:21.484 "nsid": 1, 00:15:21.484 "bdev_name": "Malloc2", 00:15:21.484 "name": "Malloc2", 00:15:21.484 "nguid": "8D6BD9DBF2BE46FEB802D120A8A79A0A", 00:15:21.484 "uuid": "8d6bd9db-f2be-46fe-b802-d120a8a79a0a" 00:15:21.484 } 00:15:21.484 ] 00:15:21.484 } 00:15:21.484 ] 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3479737 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:21.484 Malloc3 00:15:21.484 [2024-12-09 11:30:13.616466] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.484 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:21.746 [2024-12-09 11:30:13.797680] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.746 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.746 Asynchronous Event Request test 00:15:21.746 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.746 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.746 Registering asynchronous event callbacks... 00:15:21.746 Starting namespace attribute notice tests for all controllers... 00:15:21.746 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.746 aer_cb - Changed Namespace 00:15:21.746 Cleaning up... 00:15:22.008 [ 00:15:22.008 { 00:15:22.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.008 "subtype": "Discovery", 00:15:22.008 "listen_addresses": [], 00:15:22.008 "allow_any_host": true, 00:15:22.008 "hosts": [] 00:15:22.008 }, 00:15:22.008 { 00:15:22.008 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.008 "subtype": "NVMe", 00:15:22.008 "listen_addresses": [ 00:15:22.008 { 00:15:22.008 "trtype": "VFIOUSER", 00:15:22.008 "adrfam": "IPv4", 00:15:22.008 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.008 "trsvcid": "0" 00:15:22.008 } 00:15:22.008 ], 00:15:22.008 "allow_any_host": true, 00:15:22.008 "hosts": [], 00:15:22.008 "serial_number": "SPDK1", 00:15:22.008 "model_number": "SPDK bdev Controller", 00:15:22.008 "max_namespaces": 32, 00:15:22.008 "min_cntlid": 1, 00:15:22.008 "max_cntlid": 65519, 00:15:22.008 "namespaces": [ 00:15:22.008 { 00:15:22.008 "nsid": 1, 00:15:22.008 "bdev_name": "Malloc1", 00:15:22.008 "name": "Malloc1", 00:15:22.008 "nguid": "CA18B2DCF9474868BF2405758D24630F", 00:15:22.008 "uuid": "ca18b2dc-f947-4868-bf24-05758d24630f" 00:15:22.008 }, 00:15:22.008 { 00:15:22.008 "nsid": 2, 00:15:22.008 "bdev_name": "Malloc3", 00:15:22.008 "name": "Malloc3", 00:15:22.008 "nguid": "7DD0DAB3966E41E9B4C84D07D856BD06", 00:15:22.008 "uuid": "7dd0dab3-966e-41e9-b4c8-4d07d856bd06" 00:15:22.008 } 00:15:22.009 ] 00:15:22.009 }, 00:15:22.009 { 00:15:22.009 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.009 "subtype": "NVMe", 00:15:22.009 "listen_addresses": [ 00:15:22.009 { 00:15:22.009 "trtype": "VFIOUSER", 00:15:22.009 "adrfam": "IPv4", 00:15:22.009 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.009 "trsvcid": "0" 00:15:22.009 } 00:15:22.009 ], 00:15:22.009 "allow_any_host": true, 00:15:22.009 "hosts": [], 00:15:22.009 "serial_number": "SPDK2", 00:15:22.009 "model_number": "SPDK bdev 
Controller", 00:15:22.009 "max_namespaces": 32, 00:15:22.009 "min_cntlid": 1, 00:15:22.009 "max_cntlid": 65519, 00:15:22.009 "namespaces": [ 00:15:22.009 { 00:15:22.009 "nsid": 1, 00:15:22.009 "bdev_name": "Malloc2", 00:15:22.009 "name": "Malloc2", 00:15:22.009 "nguid": "8D6BD9DBF2BE46FEB802D120A8A79A0A", 00:15:22.009 "uuid": "8d6bd9db-f2be-46fe-b802-d120a8a79a0a" 00:15:22.009 } 00:15:22.009 ] 00:15:22.009 } 00:15:22.009 ] 00:15:22.009 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3479737 00:15:22.009 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.009 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.009 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.009 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:22.009 [2024-12-09 11:30:14.026292] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:15:22.009 [2024-12-09 11:30:14.026338] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479748 ] 00:15:22.009 [2024-12-09 11:30:14.081046] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:22.009 [2024-12-09 11:30:14.089233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.009 [2024-12-09 11:30:14.089257] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbe7abbd000 00:15:22.009 [2024-12-09 11:30:14.090234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.009 [2024-12-09 11:30:14.091241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.009 [2024-12-09 11:30:14.092247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.009 [2024-12-09 11:30:14.093256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.009 [2024-12-09 11:30:14.094267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.009 [2024-12-09 11:30:14.095276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.009 [2024-12-09 11:30:14.096284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.009 [2024-12-09 11:30:14.097289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:22.009 [2024-12-09 11:30:14.098296] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.009 [2024-12-09 11:30:14.098307] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbe7abb2000 00:15:22.009 [2024-12-09 11:30:14.099630] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.009 [2024-12-09 11:30:14.121163] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:22.009 [2024-12-09 11:30:14.121188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:22.009 [2024-12-09 11:30:14.123242] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.009 [2024-12-09 11:30:14.123286] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:22.009 [2024-12-09 11:30:14.123369] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:22.009 [2024-12-09 11:30:14.123384] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:22.009 [2024-12-09 11:30:14.123390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:22.009 [2024-12-09 11:30:14.124246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:22.009 [2024-12-09 11:30:14.124258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:22.009 [2024-12-09 11:30:14.124266] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:22.009 [2024-12-09 11:30:14.125249] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.009 [2024-12-09 11:30:14.125259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:22.009 [2024-12-09 11:30:14.125267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:22.009 [2024-12-09 11:30:14.126252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:22.009 [2024-12-09 11:30:14.126262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:22.009 [2024-12-09 11:30:14.127258] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:22.009 [2024-12-09 11:30:14.127267] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
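The offsets in these register accesses are the standard NVMe controller registers: CAP at 0x0, VS at 0x8 (value 0x10300 = spec version 1.3), CC at 0x14 and CSTS at 0x1c. What follows in the log is the usual enable handshake: with CC.EN = 0 and CSTS.RDY = 0, the driver sets CC.EN = 1 and polls CSTS.RDY. A sketch of that handshake, where read32/write32 stand in for whatever register accessor the transport provides (both names are placeholders; the real driver also programs the queue entry sizes, which is why the log later shows CC = 0x460001 rather than a bare 0x1):

    import time

    CC, CSTS = 0x14, 0x1C  # controller configuration / controller status

    def enable_controller(read32, write32, timeout_s=15.0):
        # Mirrors the log: CC.EN == 0 and CSTS.RDY == 0 on entry.
        write32(CC, read32(CC) | 0x1)            # set CC.EN = 1
        deadline = time.monotonic() + timeout_s  # the 'timeout 15000 ms' above
        while not (read32(CSTS) & 0x1):          # wait for CSTS.RDY = 1
            if time.monotonic() > deadline:
                raise TimeoutError("controller never set CSTS.RDY")
            time.sleep(0.001)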
00:15:22.009 [2024-12-09 11:30:14.127273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:22.009 [2024-12-09 11:30:14.127280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:22.009 [2024-12-09 11:30:14.127388] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:22.009 [2024-12-09 11:30:14.127393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:22.009 [2024-12-09 11:30:14.127398] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:22.009 [2024-12-09 11:30:14.128264] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:22.009 [2024-12-09 11:30:14.129277] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:22.009 [2024-12-09 11:30:14.130288] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.009 [2024-12-09 11:30:14.131295] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.009 [2024-12-09 11:30:14.131334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:22.009 [2024-12-09 11:30:14.132300] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:22.009 [2024-12-09 11:30:14.132310] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:22.009 [2024-12-09 11:30:14.132315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.132336] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:22.009 [2024-12-09 11:30:14.132344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.132360] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.009 [2024-12-09 11:30:14.132366] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.009 [2024-12-09 11:30:14.132371] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.009 [2024-12-09 11:30:14.132385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.009 [2024-12-09 11:30:14.139019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:22.009 
[2024-12-09 11:30:14.139033] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:22.009 [2024-12-09 11:30:14.139039] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:22.009 [2024-12-09 11:30:14.139043] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:22.009 [2024-12-09 11:30:14.139048] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:22.009 [2024-12-09 11:30:14.139053] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:22.009 [2024-12-09 11:30:14.139058] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:22.009 [2024-12-09 11:30:14.139063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.139070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.139081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:22.009 [2024-12-09 11:30:14.147016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:22.009 [2024-12-09 11:30:14.147029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.009 [2024-12-09 11:30:14.147038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.009 [2024-12-09 11:30:14.147046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.009 [2024-12-09 11:30:14.147055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.009 [2024-12-09 11:30:14.147060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.147069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.147079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:22.009 [2024-12-09 11:30:14.155015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:22.009 [2024-12-09 11:30:14.155024] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:22.009 [2024-12-09 11:30:14.155029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:22.009 [2024-12-09 11:30:14.155036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.155042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.155053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.009 [2024-12-09 11:30:14.163019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:22.009 [2024-12-09 11:30:14.163085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.163094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:22.009 [2024-12-09 11:30:14.163102] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:22.009 [2024-12-09 11:30:14.163107] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:22.009 [2024-12-09 11:30:14.163110] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.009 [2024-12-09 11:30:14.163117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.171017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.171033] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:22.272 [2024-12-09 11:30:14.171046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.171054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.171061] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.272 [2024-12-09 11:30:14.171066] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.272 [2024-12-09 11:30:14.171069] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.272 [2024-12-09 11:30:14.171076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.179016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.179031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.179040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.179048] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.272 [2024-12-09 11:30:14.179052] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.272 [2024-12-09 11:30:14.179056] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.272 [2024-12-09 11:30:14.179062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.187016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.187026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.187034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.187045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.187053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.187058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.187064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.187069] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:22.272 [2024-12-09 11:30:14.187074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:22.272 [2024-12-09 11:30:14.187079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:22.272 [2024-12-09 11:30:14.187096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.195018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.195033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.203019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.203033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.211018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
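The cdw0:7e007e completions for the NUMBER OF QUEUES feature (FID 07h) in this exchange are worth decoding: completion dword 0 carries the 0-based count of allocated I/O submission queues in bits 15:0 and completion queues in bits 31:16. A worked decode:

    def decode_num_queues(cdw0):
        # Set/Get Features FID 07h: NSQA in bits 15:0, NCQA in bits 31:16,
        # both 0-based, so add 1 to get the allocated queue counts.
        nsqa = (cdw0 & 0xFFFF) + 1
        ncqa = ((cdw0 >> 16) & 0xFFFF) + 1
        return nsqa, ncqa

    assert decode_num_queues(0x7E007E) == (127, 127)

which matches the "Number of I/O Submission Queues: 127" lines in the identify dump that follows.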
00:15:22.272 [2024-12-09 11:30:14.211032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.219018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.219034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.272 [2024-12-09 11:30:14.219039] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.272 [2024-12-09 11:30:14.219043] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.272 [2024-12-09 11:30:14.219047] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.272 [2024-12-09 11:30:14.219050] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:22.272 [2024-12-09 11:30:14.219057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.272 [2024-12-09 11:30:14.219065] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.272 [2024-12-09 11:30:14.219069] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.272 [2024-12-09 11:30:14.219072] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.272 [2024-12-09 11:30:14.219078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.219086] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.272 [2024-12-09 11:30:14.219090] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.272 [2024-12-09 11:30:14.219094] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.272 [2024-12-09 11:30:14.219102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.219110] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.272 [2024-12-09 11:30:14.219115] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.272 [2024-12-09 11:30:14.219118] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:22.272 [2024-12-09 11:30:14.219124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.272 [2024-12-09 11:30:14.227017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.227032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.272 [2024-12-09 11:30:14.227043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.272 
[2024-12-09 11:30:14.227050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.272 ===================================================== 00:15:22.272 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:22.272 ===================================================== 00:15:22.272 Controller Capabilities/Features 00:15:22.272 ================================ 00:15:22.272 Vendor ID: 4e58 00:15:22.272 Subsystem Vendor ID: 4e58 00:15:22.272 Serial Number: SPDK2 00:15:22.272 Model Number: SPDK bdev Controller 00:15:22.272 Firmware Version: 25.01 00:15:22.272 Recommended Arb Burst: 6 00:15:22.272 IEEE OUI Identifier: 8d 6b 50 00:15:22.272 Multi-path I/O 00:15:22.272 May have multiple subsystem ports: Yes 00:15:22.272 May have multiple controllers: Yes 00:15:22.272 Associated with SR-IOV VF: No 00:15:22.272 Max Data Transfer Size: 131072 00:15:22.272 Max Number of Namespaces: 32 00:15:22.272 Max Number of I/O Queues: 127 00:15:22.272 NVMe Specification Version (VS): 1.3 00:15:22.272 NVMe Specification Version (Identify): 1.3 00:15:22.272 Maximum Queue Entries: 256 00:15:22.272 Contiguous Queues Required: Yes 00:15:22.272 Arbitration Mechanisms Supported 00:15:22.272 Weighted Round Robin: Not Supported 00:15:22.272 Vendor Specific: Not Supported 00:15:22.272 Reset Timeout: 15000 ms 00:15:22.272 Doorbell Stride: 4 bytes 00:15:22.272 NVM Subsystem Reset: Not Supported 00:15:22.272 Command Sets Supported 00:15:22.272 NVM Command Set: Supported 00:15:22.272 Boot Partition: Not Supported 00:15:22.272 Memory Page Size Minimum: 4096 bytes 00:15:22.272 Memory Page Size Maximum: 4096 bytes 00:15:22.272 Persistent Memory Region: Not Supported 00:15:22.272 Optional Asynchronous Events Supported 00:15:22.272 Namespace Attribute Notices: Supported 00:15:22.272 Firmware Activation Notices: Not Supported 00:15:22.272 ANA Change Notices: Not Supported 00:15:22.272 PLE Aggregate Log Change Notices: Not Supported 00:15:22.272 LBA Status Info Alert Notices: Not Supported 00:15:22.272 EGE Aggregate Log Change Notices: Not Supported 00:15:22.273 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.273 Zone Descriptor Change Notices: Not Supported 00:15:22.273 Discovery Log Change Notices: Not Supported 00:15:22.273 Controller Attributes 00:15:22.273 128-bit Host Identifier: Supported 00:15:22.273 Non-Operational Permissive Mode: Not Supported 00:15:22.273 NVM Sets: Not Supported 00:15:22.273 Read Recovery Levels: Not Supported 00:15:22.273 Endurance Groups: Not Supported 00:15:22.273 Predictable Latency Mode: Not Supported 00:15:22.273 Traffic Based Keep ALive: Not Supported 00:15:22.273 Namespace Granularity: Not Supported 00:15:22.273 SQ Associations: Not Supported 00:15:22.273 UUID List: Not Supported 00:15:22.273 Multi-Domain Subsystem: Not Supported 00:15:22.273 Fixed Capacity Management: Not Supported 00:15:22.273 Variable Capacity Management: Not Supported 00:15:22.273 Delete Endurance Group: Not Supported 00:15:22.273 Delete NVM Set: Not Supported 00:15:22.273 Extended LBA Formats Supported: Not Supported 00:15:22.273 Flexible Data Placement Supported: Not Supported 00:15:22.273 00:15:22.273 Controller Memory Buffer Support 00:15:22.273 ================================ 00:15:22.273 Supported: No 00:15:22.273 00:15:22.273 Persistent Memory Region Support 00:15:22.273 ================================ 00:15:22.273 Supported: No 00:15:22.273 00:15:22.273 Admin Command Set Attributes 
00:15:22.273 ============================ 00:15:22.273 Security Send/Receive: Not Supported 00:15:22.273 Format NVM: Not Supported 00:15:22.273 Firmware Activate/Download: Not Supported 00:15:22.273 Namespace Management: Not Supported 00:15:22.273 Device Self-Test: Not Supported 00:15:22.273 Directives: Not Supported 00:15:22.273 NVMe-MI: Not Supported 00:15:22.273 Virtualization Management: Not Supported 00:15:22.273 Doorbell Buffer Config: Not Supported 00:15:22.273 Get LBA Status Capability: Not Supported 00:15:22.273 Command & Feature Lockdown Capability: Not Supported 00:15:22.273 Abort Command Limit: 4 00:15:22.273 Async Event Request Limit: 4 00:15:22.273 Number of Firmware Slots: N/A 00:15:22.273 Firmware Slot 1 Read-Only: N/A 00:15:22.273 Firmware Activation Without Reset: N/A 00:15:22.273 Multiple Update Detection Support: N/A 00:15:22.273 Firmware Update Granularity: No Information Provided 00:15:22.273 Per-Namespace SMART Log: No 00:15:22.273 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.273 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:22.273 Command Effects Log Page: Supported 00:15:22.273 Get Log Page Extended Data: Supported 00:15:22.273 Telemetry Log Pages: Not Supported 00:15:22.273 Persistent Event Log Pages: Not Supported 00:15:22.273 Supported Log Pages Log Page: May Support 00:15:22.273 Commands Supported & Effects Log Page: Not Supported 00:15:22.273 Feature Identifiers & Effects Log Page:May Support 00:15:22.273 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.273 Data Area 4 for Telemetry Log: Not Supported 00:15:22.273 Error Log Page Entries Supported: 128 00:15:22.273 Keep Alive: Supported 00:15:22.273 Keep Alive Granularity: 10000 ms 00:15:22.273 00:15:22.273 NVM Command Set Attributes 00:15:22.273 ========================== 00:15:22.273 Submission Queue Entry Size 00:15:22.273 Max: 64 00:15:22.273 Min: 64 00:15:22.273 Completion Queue Entry Size 00:15:22.273 Max: 16 00:15:22.273 Min: 16 00:15:22.273 Number of Namespaces: 32 00:15:22.273 Compare Command: Supported 00:15:22.273 Write Uncorrectable Command: Not Supported 00:15:22.273 Dataset Management Command: Supported 00:15:22.273 Write Zeroes Command: Supported 00:15:22.273 Set Features Save Field: Not Supported 00:15:22.273 Reservations: Not Supported 00:15:22.273 Timestamp: Not Supported 00:15:22.273 Copy: Supported 00:15:22.273 Volatile Write Cache: Present 00:15:22.273 Atomic Write Unit (Normal): 1 00:15:22.273 Atomic Write Unit (PFail): 1 00:15:22.273 Atomic Compare & Write Unit: 1 00:15:22.273 Fused Compare & Write: Supported 00:15:22.273 Scatter-Gather List 00:15:22.273 SGL Command Set: Supported (Dword aligned) 00:15:22.273 SGL Keyed: Not Supported 00:15:22.273 SGL Bit Bucket Descriptor: Not Supported 00:15:22.273 SGL Metadata Pointer: Not Supported 00:15:22.273 Oversized SGL: Not Supported 00:15:22.273 SGL Metadata Address: Not Supported 00:15:22.273 SGL Offset: Not Supported 00:15:22.273 Transport SGL Data Block: Not Supported 00:15:22.273 Replay Protected Memory Block: Not Supported 00:15:22.273 00:15:22.273 Firmware Slot Information 00:15:22.273 ========================= 00:15:22.273 Active slot: 1 00:15:22.273 Slot 1 Firmware Revision: 25.01 00:15:22.273 00:15:22.273 00:15:22.273 Commands Supported and Effects 00:15:22.273 ============================== 00:15:22.273 Admin Commands 00:15:22.273 -------------- 00:15:22.273 Get Log Page (02h): Supported 00:15:22.273 Identify (06h): Supported 00:15:22.273 Abort (08h): Supported 00:15:22.273 Set Features (09h): Supported 
00:15:22.273 Get Features (0Ah): Supported 00:15:22.273 Asynchronous Event Request (0Ch): Supported 00:15:22.273 Keep Alive (18h): Supported 00:15:22.273 I/O Commands 00:15:22.273 ------------ 00:15:22.273 Flush (00h): Supported LBA-Change 00:15:22.273 Write (01h): Supported LBA-Change 00:15:22.273 Read (02h): Supported 00:15:22.273 Compare (05h): Supported 00:15:22.273 Write Zeroes (08h): Supported LBA-Change 00:15:22.273 Dataset Management (09h): Supported LBA-Change 00:15:22.273 Copy (19h): Supported LBA-Change 00:15:22.273 00:15:22.273 Error Log 00:15:22.273 ========= 00:15:22.273 00:15:22.273 Arbitration 00:15:22.273 =========== 00:15:22.273 Arbitration Burst: 1 00:15:22.273 00:15:22.273 Power Management 00:15:22.273 ================ 00:15:22.273 Number of Power States: 1 00:15:22.273 Current Power State: Power State #0 00:15:22.273 Power State #0: 00:15:22.273 Max Power: 0.00 W 00:15:22.273 Non-Operational State: Operational 00:15:22.273 Entry Latency: Not Reported 00:15:22.273 Exit Latency: Not Reported 00:15:22.273 Relative Read Throughput: 0 00:15:22.273 Relative Read Latency: 0 00:15:22.273 Relative Write Throughput: 0 00:15:22.273 Relative Write Latency: 0 00:15:22.273 Idle Power: Not Reported 00:15:22.273 Active Power: Not Reported 00:15:22.273 Non-Operational Permissive Mode: Not Supported 00:15:22.273 00:15:22.273 Health Information 00:15:22.273 ================== 00:15:22.273 Critical Warnings: 00:15:22.273 Available Spare Space: OK 00:15:22.273 Temperature: OK 00:15:22.273 Device Reliability: OK 00:15:22.273 Read Only: No 00:15:22.273 Volatile Memory Backup: OK 00:15:22.273 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:22.273 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.273 Available Spare: 0% 00:15:22.273 Available Spare Threshold: 0% 00:15:22.273 Life Percentage Used: 0% 00:15:22.273 Data Units Read: 0 00:15:22.273 Data Units Written: 0 00:15:22.273 Host Read Commands: 0 00:15:22.273 Host Write Commands: 0 00:15:22.273 Controller Busy Time: 0 minutes 00:15:22.273 Power Cycles: 0 00:15:22.273 Power On Hours: 0 hours 00:15:22.273 Unsafe Shutdowns: 0 00:15:22.273 Unrecoverable Media Errors: 0 00:15:22.273 Lifetime Error Log Entries: 0 00:15:22.273 Warning Temperature Time: 0 minutes 00:15:22.274 Critical Temperature Time: 0 minutes 00:15:22.274 00:15:22.274 Number of Queues 00:15:22.274 ================ 00:15:22.274 Number of I/O Submission Queues: 127 00:15:22.274 Number of I/O Completion Queues: 127 00:15:22.274 00:15:22.274 Active Namespaces 00:15:22.274 ================= 00:15:22.274 Namespace ID:1 00:15:22.274 Error Recovery Timeout: Unlimited 00:15:22.274 Command Set Identifier: NVM (00h) 00:15:22.274 Deallocate: Supported 00:15:22.274 Deallocated/Unwritten Error: Not Supported 00:15:22.274 Deallocated Read Value: Unknown 00:15:22.274 Deallocate in Write Zeroes: Not Supported 00:15:22.274 Deallocated Guard Field: 0xFFFF 00:15:22.274 Flush: Supported 00:15:22.274 Reservation: Supported 00:15:22.274 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.274 Size (in LBAs): 131072 (0GiB) 00:15:22.274 Capacity (in LBAs): 131072 (0GiB) 00:15:22.274 Utilization (in LBAs): 131072 (0GiB) 00:15:22.274 NGUID: 8D6BD9DBF2BE46FEB802D120A8A79A0A 00:15:22.274 UUID: 8d6bd9db-f2be-46fe-b802-d120a8a79a0a 00:15:22.274 Thin Provisioning: Not Supported 00:15:22.274 Per-NS Atomic Units: Yes 00:15:22.274 Atomic Boundary Size (Normal): 0 00:15:22.274 Atomic Boundary Size (PFail): 0 00:15:22.274 Atomic Boundary Offset: 0 00:15:22.274 Maximum Single Source Range Length: 65535 00:15:22.274 Maximum Copy Length: 65535 00:15:22.274 Maximum Source Range Count: 1 00:15:22.274 NGUID/EUI64 Never Reused: No 00:15:22.274 Namespace Write Protected: No 00:15:22.274 Number of LBA Formats: 1 00:15:22.274 Current LBA Format: LBA Format #00 00:15:22.274 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:22.274 00:15:22.274 
[2024-12-09 11:30:14.227152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.273 [2024-12-09 11:30:14.235021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.273 [2024-12-09 11:30:14.235056] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:22.273 [2024-12-09 11:30:14.235066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.273 [2024-12-09 11:30:14.235073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.273 [2024-12-09 11:30:14.235079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.273 [2024-12-09 11:30:14.235086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.273 [2024-12-09 11:30:14.235141] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.273 [2024-12-09 11:30:14.235152] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:22.273 [2024-12-09 11:30:14.236145] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.273 [2024-12-09 11:30:14.236196] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:22.273 [2024-12-09 11:30:14.236203] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:22.273 [2024-12-09 11:30:14.237150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:22.273 [2024-12-09 11:30:14.237163] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:22.273 [2024-12-09 11:30:14.237211] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:22.273 [2024-12-09 11:30:14.240018] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.273 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:22.536 [2024-12-09 11:30:14.435086] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.835 Initializing NVMe Controllers 00:15:27.835 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.835 Initialization complete. Launching workers. 00:15:27.835 ======================================================== 00:15:27.835 Latency(us) 00:15:27.835 Device Information : IOPS MiB/s Average min max 00:15:27.835 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40014.01 156.30 3198.76 869.98 6876.08 00:15:27.835 ======================================================== 00:15:27.835 Total : 40014.01 156.30 3198.76 869.98 6876.08 00:15:27.835 00:15:27.835 [2024-12-09 11:30:19.540205] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.835 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.835 [2024-12-09 11:30:19.731779] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.128 Initializing NVMe Controllers 00:15:33.128 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:33.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:33.128 Initialization complete. Launching workers. 00:15:33.128 ======================================================== 00:15:33.128 Latency(us) 00:15:33.128 Device Information : IOPS MiB/s Average min max 00:15:33.128 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33825.97 132.13 3783.65 1131.15 7476.87 00:15:33.128 ======================================================== 00:15:33.128 Total : 33825.97 132.13 3783.65 1131.15 7476.87 00:15:33.128 00:15:33.128 [2024-12-09 11:30:24.750520] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.128 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:33.128 [2024-12-09 11:30:24.951688] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.419 [2024-12-09 11:30:30.097099] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.419 Initializing NVMe Controllers 00:15:38.419 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.419 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:38.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:38.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:38.419 Initialization complete. Launching workers. 
00:15:38.419 Starting thread on core 2 00:15:38.419 Starting thread on core 3 00:15:38.419 Starting thread on core 1 00:15:38.419 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:38.419 [2024-12-09 11:30:30.376740] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.722 [2024-12-09 11:30:33.446924] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.722 Initializing NVMe Controllers 00:15:41.722 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.722 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.722 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:41.722 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:41.722 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:41.722 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:41.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.722 Initialization complete. Launching workers. 00:15:41.722 Starting thread on core 1 with urgent priority queue 00:15:41.722 Starting thread on core 2 with urgent priority queue 00:15:41.722 Starting thread on core 3 with urgent priority queue 00:15:41.722 Starting thread on core 0 with urgent priority queue 00:15:41.722 SPDK bdev Controller (SPDK2 ) core 0: 11832.33 IO/s 8.45 secs/100000 ios 00:15:41.722 SPDK bdev Controller (SPDK2 ) core 1: 8027.67 IO/s 12.46 secs/100000 ios 00:15:41.722 SPDK bdev Controller (SPDK2 ) core 2: 12753.33 IO/s 7.84 secs/100000 ios 00:15:41.722 SPDK bdev Controller (SPDK2 ) core 3: 10499.67 IO/s 9.52 secs/100000 ios 00:15:41.722 ======================================================== 00:15:41.722 00:15:41.722 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.722 [2024-12-09 11:30:33.727547] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.722 Initializing NVMe Controllers 00:15:41.722 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.722 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.722 Namespace ID: 1 size: 0GB 00:15:41.722 Initialization complete. 00:15:41.722 INFO: using host memory buffer for IO 00:15:41.722 Hello world! 
00:15:41.722 [2024-12-09 11:30:33.737597] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.722 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.984 [2024-12-09 11:30:34.026285] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.369 Initializing NVMe Controllers 00:15:43.369 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.369 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.369 Initialization complete. Launching workers. 00:15:43.370 submit (in ns) avg, min, max = 8735.6, 3905.0, 3999503.3 00:15:43.370 complete (in ns) avg, min, max = 18726.2, 2421.7, 3999900.8 00:15:43.370 00:15:43.370 Submit histogram 00:15:43.370 ================ 00:15:43.370 Range in us Cumulative Count 00:15:43.370 3.893 - 3.920: 0.3451% ( 65) 00:15:43.370 3.920 - 3.947: 3.6528% ( 623) 00:15:43.370 3.947 - 3.973: 10.0717% ( 1209) 00:15:43.370 3.973 - 4.000: 19.8779% ( 1847) 00:15:43.370 4.000 - 4.027: 31.2185% ( 2136) 00:15:43.370 4.027 - 4.053: 43.6899% ( 2349) 00:15:43.370 4.053 - 4.080: 58.6461% ( 2817) 00:15:43.370 4.080 - 4.107: 74.4784% ( 2982) 00:15:43.370 4.107 - 4.133: 87.6560% ( 2482) 00:15:43.370 4.133 - 4.160: 94.9243% ( 1369) 00:15:43.370 4.160 - 4.187: 97.9294% ( 566) 00:15:43.370 4.187 - 4.213: 99.0125% ( 204) 00:15:43.370 4.213 - 4.240: 99.4001% ( 73) 00:15:43.370 4.240 - 4.267: 99.4744% ( 14) 00:15:43.370 4.267 - 4.293: 99.4956% ( 4) 00:15:43.370 4.347 - 4.373: 99.5009% ( 1) 00:15:43.370 4.693 - 4.720: 99.5062% ( 1) 00:15:43.370 4.773 - 4.800: 99.5115% ( 1) 00:15:43.370 4.800 - 4.827: 99.5169% ( 1) 00:15:43.370 5.333 - 5.360: 99.5222% ( 1) 00:15:43.370 5.760 - 5.787: 99.5275% ( 1) 00:15:43.370 5.840 - 5.867: 99.5381% ( 2) 00:15:43.370 5.867 - 5.893: 99.5434% ( 1) 00:15:43.370 5.947 - 5.973: 99.5487% ( 1) 00:15:43.370 6.027 - 6.053: 99.5540% ( 1) 00:15:43.370 6.080 - 6.107: 99.5593% ( 1) 00:15:43.370 6.133 - 6.160: 99.5753% ( 3) 00:15:43.370 6.187 - 6.213: 99.5806% ( 1) 00:15:43.370 6.240 - 6.267: 99.5912% ( 2) 00:15:43.370 6.267 - 6.293: 99.6018% ( 2) 00:15:43.370 6.293 - 6.320: 99.6124% ( 2) 00:15:43.370 6.347 - 6.373: 99.6230% ( 2) 00:15:43.370 6.427 - 6.453: 99.6284% ( 1) 00:15:43.370 6.453 - 6.480: 99.6337% ( 1) 00:15:43.370 6.480 - 6.507: 99.6443% ( 2) 00:15:43.370 6.533 - 6.560: 99.6602% ( 3) 00:15:43.370 6.587 - 6.613: 99.6655% ( 1) 00:15:43.370 6.613 - 6.640: 99.6708% ( 1) 00:15:43.370 6.640 - 6.667: 99.6868% ( 3) 00:15:43.370 6.667 - 6.693: 99.6921% ( 1) 00:15:43.370 6.693 - 6.720: 99.6974% ( 1) 00:15:43.370 6.720 - 6.747: 99.7133% ( 3) 00:15:43.370 6.747 - 6.773: 99.7239% ( 2) 00:15:43.370 6.773 - 6.800: 99.7292% ( 1) 00:15:43.370 6.800 - 6.827: 99.7345% ( 1) 00:15:43.370 6.827 - 6.880: 99.7452% ( 2) 00:15:43.370 6.880 - 6.933: 99.7505% ( 1) 00:15:43.370 6.933 - 6.987: 99.7717% ( 4) 00:15:43.370 6.987 - 7.040: 99.7929% ( 4) 00:15:43.370 7.040 - 7.093: 99.8142% ( 4) 00:15:43.370 7.093 - 7.147: 99.8195% ( 1) 00:15:43.370 7.147 - 7.200: 99.8248% ( 1) 00:15:43.370 7.200 - 7.253: 99.8301% ( 1) 00:15:43.370 7.253 - 7.307: 99.8407% ( 2) 00:15:43.370 7.307 - 7.360: 99.8513% ( 2) 00:15:43.370 7.467 - 7.520: 99.8566% ( 1) 00:15:43.370 7.573 - 7.627: 99.8620% ( 1) 00:15:43.370 7.680 - 7.733: 99.8673% ( 1) 
00:15:43.370 8.053 - 8.107: 99.8726% ( 1) 00:15:43.370 10.560 - 10.613: 99.8779% ( 1) 00:15:43.370 13.973 - 14.080: 99.8832% ( 1) 00:15:43.370 3986.773 - 4014.080: 100.0000% ( 22) 00:15:43.370 00:15:43.370 Complete histogram 00:15:43.370 ================== 00:15:43.370 Range in us Cumulative Count 00:15:43.370 2.413 - 2.427: 1.1149% ( 210) 00:15:43.370 2.427 - 2.440: 6.2384% ( 965) 00:15:43.370 2.440 - 2.453: 6.9604% ( 136) 00:15:43.370 2.453 - 2.467: 10.1619% ( 603) 00:15:43.370 2.467 - 2.480: 49.8221% ( 7470) 00:15:43.370 2.480 - 2.493: 58.3010% ( 1597) 00:15:43.370 2.493 - 2.507: 74.8713% ( 3121) 00:15:43.370 2.507 - 2.520: 80.2708% ( 1017) 00:15:43.370 2.520 - 2.533: 83.4139% ( 592) 00:15:43.370 2.533 - 2.547: 87.0613% ( 687) 00:15:43.370 2.547 - 2.560: 91.9512% ( 921) 00:15:43.370 2.560 - 2.573: 95.2270% ( 617) 00:15:43.370 2.573 - 2.587: 97.6055% ( 448) 00:15:43.370 2.587 - 2.600: 98.7789% ( 221) 00:15:43.370 2.600 - 2.613: 99.1983% ( 79) 00:15:43.370 2.613 - 2.627: 99.3257% ( 24) 00:15:43.370 [2024-12-09 11:30:35.132717] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.370 2.627 - 2.640: 99.3523% ( 5) 00:15:43.370 2.640 - 2.653: 99.3629% ( 2) 00:15:43.370 2.693 - 2.707: 99.3682% ( 1) 00:15:43.370 4.533 - 4.560: 99.3735% ( 1) 00:15:43.370 4.560 - 4.587: 99.3788% ( 1) 00:15:43.370 4.587 - 4.613: 99.3841% ( 1) 00:15:43.370 4.640 - 4.667: 99.3947% ( 2) 00:15:43.370 4.827 - 4.853: 99.4001% ( 1) 00:15:43.370 4.907 - 4.933: 99.4107% ( 2) 00:15:43.370 4.933 - 4.960: 99.4213% ( 2) 00:15:43.370 4.960 - 4.987: 99.4266% ( 1) 00:15:43.370 5.013 - 5.040: 99.4319% ( 1) 00:15:43.370 5.067 - 5.093: 99.4425% ( 2) 00:15:43.370 5.093 - 5.120: 99.4531% ( 2) 00:15:43.370 5.147 - 5.173: 99.4585% ( 1) 00:15:43.370 5.173 - 5.200: 99.4638% ( 1) 00:15:43.370 5.200 - 5.227: 99.4744% ( 2) 00:15:43.370 5.307 - 5.333: 99.4797% ( 1) 00:15:43.370 5.360 - 5.387: 99.4903% ( 2) 00:15:43.370 5.413 - 5.440: 99.4956% ( 1) 00:15:43.370 5.440 - 5.467: 99.5009% ( 1) 00:15:43.370 5.493 - 5.520: 99.5062% ( 1) 00:15:43.370 5.520 - 5.547: 99.5115% ( 1) 00:15:43.370 5.573 - 5.600: 99.5222% ( 2) 00:15:43.370 5.627 - 5.653: 99.5275% ( 1) 00:15:43.370 5.707 - 5.733: 99.5328% ( 1) 00:15:43.370 5.760 - 5.787: 99.5434% ( 2) 00:15:43.370 5.787 - 5.813: 99.5540% ( 2) 00:15:43.370 5.813 - 5.840: 99.5593% ( 1) 00:15:43.370 5.867 - 5.893: 99.5646% ( 1) 00:15:43.370 5.947 - 5.973: 99.5699% ( 1) 00:15:43.370 6.213 - 6.240: 99.5753% ( 1) 00:15:43.370 10.240 - 10.293: 99.5806% ( 1) 00:15:43.370 11.467 - 11.520: 99.5859% ( 1) 00:15:43.370 44.373 - 44.587: 99.5912% ( 1) 00:15:43.370 1993.387 - 2007.040: 99.5965% ( 1) 00:15:43.370 3986.773 - 4014.080: 100.0000% ( 76) 00:15:43.370 00:15:43.370 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.370 [ 00:15:43.370 { 
00:15:43.370 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.370 "subtype": "Discovery", 00:15:43.370 "listen_addresses": [], 00:15:43.370 "allow_any_host": true, 00:15:43.370 "hosts": [] 00:15:43.370 }, 00:15:43.370 { 00:15:43.370 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.370 "subtype": "NVMe", 00:15:43.370 "listen_addresses": [ 00:15:43.370 { 00:15:43.370 "trtype": "VFIOUSER", 00:15:43.370 "adrfam": "IPv4", 00:15:43.370 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.370 "trsvcid": "0" 00:15:43.370 } 00:15:43.370 ], 00:15:43.370 "allow_any_host": true, 00:15:43.370 "hosts": [], 00:15:43.370 "serial_number": "SPDK1", 00:15:43.370 "model_number": "SPDK bdev Controller", 00:15:43.370 "max_namespaces": 32, 00:15:43.370 "min_cntlid": 1, 00:15:43.370 "max_cntlid": 65519, 00:15:43.370 "namespaces": [ 00:15:43.370 { 00:15:43.370 "nsid": 1, 00:15:43.370 "bdev_name": "Malloc1", 00:15:43.370 "name": "Malloc1", 00:15:43.370 "nguid": "CA18B2DCF9474868BF2405758D24630F", 00:15:43.370 "uuid": "ca18b2dc-f947-4868-bf24-05758d24630f" 00:15:43.370 }, 00:15:43.370 { 00:15:43.370 "nsid": 2, 00:15:43.370 "bdev_name": "Malloc3", 00:15:43.370 "name": "Malloc3", 00:15:43.370 "nguid": "7DD0DAB3966E41E9B4C84D07D856BD06", 00:15:43.370 "uuid": "7dd0dab3-966e-41e9-b4c8-4d07d856bd06" 00:15:43.370 } 00:15:43.370 ] 00:15:43.370 }, 00:15:43.370 { 00:15:43.370 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.370 "subtype": "NVMe", 00:15:43.370 "listen_addresses": [ 00:15:43.370 { 00:15:43.370 "trtype": "VFIOUSER", 00:15:43.370 "adrfam": "IPv4", 00:15:43.370 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.370 "trsvcid": "0" 00:15:43.370 } 00:15:43.370 ], 00:15:43.370 "allow_any_host": true, 00:15:43.370 "hosts": [], 00:15:43.370 "serial_number": "SPDK2", 00:15:43.370 "model_number": "SPDK bdev Controller", 00:15:43.370 "max_namespaces": 32, 00:15:43.370 "min_cntlid": 1, 00:15:43.370 "max_cntlid": 65519, 00:15:43.370 "namespaces": [ 00:15:43.370 { 00:15:43.370 "nsid": 1, 00:15:43.370 "bdev_name": "Malloc2", 00:15:43.370 "name": "Malloc2", 00:15:43.370 "nguid": "8D6BD9DBF2BE46FEB802D120A8A79A0A", 00:15:43.370 "uuid": "8d6bd9db-f2be-46fe-b802-d120a8a79a0a" 00:15:43.370 } 00:15:43.370 ] 00:15:43.370 } 00:15:43.370 ] 00:15:43.370 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.370 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3483983 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.371 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:43.632 Malloc4 00:15:43.632 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:43.632 [2024-12-09 11:30:35.556969] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.632 [2024-12-09 11:30:35.702946] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.632 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.632 Asynchronous Event Request test 00:15:43.632 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.632 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.632 Registering asynchronous event callbacks... 00:15:43.632 Starting namespace attribute notice tests for all controllers... 00:15:43.632 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.632 aer_cb - Changed Namespace 00:15:43.632 Cleaning up... 00:15:43.894 [ 00:15:43.894 { 00:15:43.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.894 "subtype": "Discovery", 00:15:43.894 "listen_addresses": [], 00:15:43.894 "allow_any_host": true, 00:15:43.894 "hosts": [] 00:15:43.894 }, 00:15:43.894 { 00:15:43.894 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.894 "subtype": "NVMe", 00:15:43.894 "listen_addresses": [ 00:15:43.894 { 00:15:43.894 "trtype": "VFIOUSER", 00:15:43.894 "adrfam": "IPv4", 00:15:43.894 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.894 "trsvcid": "0" 00:15:43.894 } 00:15:43.894 ], 00:15:43.894 "allow_any_host": true, 00:15:43.894 "hosts": [], 00:15:43.894 "serial_number": "SPDK1", 00:15:43.894 "model_number": "SPDK bdev Controller", 00:15:43.894 "max_namespaces": 32, 00:15:43.894 "min_cntlid": 1, 00:15:43.894 "max_cntlid": 65519, 00:15:43.894 "namespaces": [ 00:15:43.894 { 00:15:43.894 "nsid": 1, 00:15:43.894 "bdev_name": "Malloc1", 00:15:43.894 "name": "Malloc1", 00:15:43.894 "nguid": "CA18B2DCF9474868BF2405758D24630F", 00:15:43.894 "uuid": "ca18b2dc-f947-4868-bf24-05758d24630f" 00:15:43.894 }, 00:15:43.894 { 00:15:43.894 "nsid": 2, 00:15:43.894 "bdev_name": "Malloc3", 00:15:43.894 "name": "Malloc3", 00:15:43.894 "nguid": "7DD0DAB3966E41E9B4C84D07D856BD06", 00:15:43.894 "uuid": "7dd0dab3-966e-41e9-b4c8-4d07d856bd06" 00:15:43.894 } 00:15:43.894 ] 00:15:43.894 }, 00:15:43.894 { 00:15:43.894 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.894 "subtype": "NVMe", 00:15:43.894 "listen_addresses": [ 00:15:43.894 { 00:15:43.894 "trtype": "VFIOUSER", 00:15:43.894 "adrfam": "IPv4", 00:15:43.894 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.894 "trsvcid": "0" 00:15:43.894 } 00:15:43.894 ], 00:15:43.894 "allow_any_host": true, 00:15:43.894 "hosts": [], 00:15:43.894 "serial_number": "SPDK2", 00:15:43.894 "model_number": "SPDK bdev 
Controller", 00:15:43.894 "max_namespaces": 32, 00:15:43.894 "min_cntlid": 1, 00:15:43.894 "max_cntlid": 65519, 00:15:43.894 "namespaces": [ 00:15:43.894 { 00:15:43.894 "nsid": 1, 00:15:43.894 "bdev_name": "Malloc2", 00:15:43.894 "name": "Malloc2", 00:15:43.894 "nguid": "8D6BD9DBF2BE46FEB802D120A8A79A0A", 00:15:43.894 "uuid": "8d6bd9db-f2be-46fe-b802-d120a8a79a0a" 00:15:43.894 }, 00:15:43.894 { 00:15:43.894 "nsid": 2, 00:15:43.894 "bdev_name": "Malloc4", 00:15:43.894 "name": "Malloc4", 00:15:43.894 "nguid": "6B0A653EB865422C80B5FA8A1BF25131", 00:15:43.894 "uuid": "6b0a653e-b865-422c-80b5-fa8a1bf25131" 00:15:43.894 } 00:15:43.894 ] 00:15:43.894 } 00:15:43.894 ] 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3483983 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3474344 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3474344 ']' 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3474344 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3474344 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3474344' 00:15:43.894 killing process with pid 3474344 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3474344 00:15:43.894 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3474344 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3484117 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3484117' 00:15:44.156 Process pid: 3484117 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3484117 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3484117 ']' 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.156 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:44.156 [2024-12-09 11:30:36.191049] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:44.156 [2024-12-09 11:30:36.191991] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:15:44.156 [2024-12-09 11:30:36.192044] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.156 [2024-12-09 11:30:36.265521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.156 [2024-12-09 11:30:36.301564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.156 [2024-12-09 11:30:36.301598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.156 [2024-12-09 11:30:36.301606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.156 [2024-12-09 11:30:36.301612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.156 [2024-12-09 11:30:36.301618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.156 [2024-12-09 11:30:36.303074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.156 [2024-12-09 11:30:36.303372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.156 [2024-12-09 11:30:36.303511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.156 [2024-12-09 11:30:36.303511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.418 [2024-12-09 11:30:36.359993] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:44.418 [2024-12-09 11:30:36.360034] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:44.418 [2024-12-09 11:30:36.361021] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:44.418 [2024-12-09 11:30:36.361666] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:15:44.418 [2024-12-09 11:30:36.361770] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:44.990 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.990 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:44.990 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:45.932 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:46.192 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:46.193 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:46.193 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.193 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:46.193 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:46.453 Malloc1 00:15:46.454 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:46.454 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:46.714 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:46.975 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.975 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:46.975 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:46.975 Malloc2 00:15:46.975 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:47.236 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:47.497 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:47.497 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:47.497 11:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3484117 00:15:47.497 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3484117 ']' 00:15:47.497 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3484117 00:15:47.497 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:47.497 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.497 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3484117 00:15:47.758 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3484117' 00:15:47.759 killing process with pid 3484117 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3484117 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3484117 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:47.759 00:15:47.759 real 0m51.275s 00:15:47.759 user 3m16.581s 00:15:47.759 sys 0m2.674s 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:47.759 ************************************ 00:15:47.759 END TEST nvmf_vfio_user 00:15:47.759 ************************************ 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.759 11:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.020 ************************************ 00:15:48.020 START TEST nvmf_vfio_user_nvme_compliance 00:15:48.020 ************************************ 00:15:48.020 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:48.021 * Looking for test storage... 
00:15:48.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.021 --rc genhtml_branch_coverage=1 00:15:48.021 --rc genhtml_function_coverage=1 00:15:48.021 --rc genhtml_legend=1 00:15:48.021 --rc geninfo_all_blocks=1 00:15:48.021 --rc geninfo_unexecuted_blocks=1 00:15:48.021 00:15:48.021 ' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.021 --rc genhtml_branch_coverage=1 00:15:48.021 --rc genhtml_function_coverage=1 00:15:48.021 --rc genhtml_legend=1 00:15:48.021 --rc geninfo_all_blocks=1 00:15:48.021 --rc geninfo_unexecuted_blocks=1 00:15:48.021 00:15:48.021 ' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.021 --rc genhtml_branch_coverage=1 00:15:48.021 --rc genhtml_function_coverage=1 00:15:48.021 --rc genhtml_legend=1 00:15:48.021 --rc geninfo_all_blocks=1 00:15:48.021 --rc geninfo_unexecuted_blocks=1 00:15:48.021 00:15:48.021 ' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.021 --rc genhtml_branch_coverage=1 00:15:48.021 --rc genhtml_function_coverage=1 00:15:48.021 --rc genhtml_legend=1 00:15:48.021 --rc geninfo_all_blocks=1 00:15:48.021 --rc 
geninfo_unexecuted_blocks=1 00:15:48.021 00:15:48.021 ' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.021 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3484893 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3484893' 00:15:48.022 Process pid: 3484893 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3484893 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3484893 ']' 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.022 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:48.285 [2024-12-09 11:30:40.215227] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:15:48.285 [2024-12-09 11:30:40.215283] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.285 [2024-12-09 11:30:40.287916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.285 [2024-12-09 11:30:40.323527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.285 [2024-12-09 11:30:40.323560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.285 [2024-12-09 11:30:40.323568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.285 [2024-12-09 11:30:40.323575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.285 [2024-12-09 11:30:40.323584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.285 [2024-12-09 11:30:40.324933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.285 [2024-12-09 11:30:40.325063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.285 [2024-12-09 11:30:40.325066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.859 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.859 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:48.859 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 malloc0 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:50.247 11:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.247 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:50.247 00:15:50.247 00:15:50.247 CUnit - A unit testing framework for C - Version 2.1-3 00:15:50.247 http://cunit.sourceforge.net/ 00:15:50.247 00:15:50.247 00:15:50.247 Suite: nvme_compliance 00:15:50.247 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 11:30:42.286147] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.247 [2024-12-09 11:30:42.287522] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:50.247 [2024-12-09 11:30:42.287534] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:50.247 [2024-12-09 11:30:42.287538] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:50.247 [2024-12-09 11:30:42.289166] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.247 passed 00:15:50.247 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 11:30:42.384758] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.247 [2024-12-09 11:30:42.387770] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.509 passed 00:15:50.509 Test: admin_identify_ns ...[2024-12-09 11:30:42.485269] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.509 [2024-12-09 11:30:42.545042] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:50.509 [2024-12-09 11:30:42.553025] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:50.509 [2024-12-09 11:30:42.574139] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:50.509 passed 00:15:50.509 Test: admin_get_features_mandatory_features ...[2024-12-09 11:30:42.665724] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.770 [2024-12-09 11:30:42.671746] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.770 passed 00:15:50.770 Test: admin_get_features_optional_features ...[2024-12-09 11:30:42.764292] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.770 [2024-12-09 11:30:42.767312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.770 passed 00:15:50.770 Test: admin_set_features_number_of_queues ...[2024-12-09 11:30:42.857426] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.031 [2024-12-09 11:30:42.962129] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.031 passed 00:15:51.031 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 11:30:43.053724] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.031 [2024-12-09 11:30:43.056739] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.031 passed 00:15:51.031 Test: admin_get_log_page_with_lpo ...[2024-12-09 11:30:43.150864] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.292 [2024-12-09 11:30:43.218024] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:51.292 [2024-12-09 11:30:43.231063] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.292 passed 00:15:51.292 Test: fabric_property_get ...[2024-12-09 11:30:43.322689] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.292 [2024-12-09 11:30:43.323934] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:51.292 [2024-12-09 11:30:43.325704] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.292 passed 00:15:51.292 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 11:30:43.421336] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.292 [2024-12-09 11:30:43.422588] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:51.292 [2024-12-09 11:30:43.424357] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.554 passed 00:15:51.554 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 11:30:43.516459] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.554 [2024-12-09 11:30:43.600023] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.554 [2024-12-09 11:30:43.616019] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.554 [2024-12-09 11:30:43.621111] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.554 passed 00:15:51.815 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 11:30:43.716186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.815 [2024-12-09 11:30:43.717432] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:51.815 [2024-12-09 11:30:43.719201] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.815 passed 00:15:51.815 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 11:30:43.812299] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.815 [2024-12-09 11:30:43.888018] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:51.815 [2024-12-09 11:30:43.912019] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.815 [2024-12-09 11:30:43.917101] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.815 passed 00:15:52.076 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 11:30:44.010119] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.076 [2024-12-09 11:30:44.011366] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:52.076 [2024-12-09 11:30:44.011390] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:52.076 [2024-12-09 11:30:44.014137] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.076 passed 00:15:52.076 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 11:30:44.106253] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.076 [2024-12-09 11:30:44.198023] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:52.076 [2024-12-09 11:30:44.206022] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:52.076 [2024-12-09 11:30:44.214017] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:52.076 [2024-12-09 11:30:44.222021] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:52.337 [2024-12-09 11:30:44.251099] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.337 passed 00:15:52.337 Test: admin_create_io_sq_verify_pc ...[2024-12-09 11:30:44.345054] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.337 [2024-12-09 11:30:44.365025] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:52.337 [2024-12-09 11:30:44.382230] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.337 passed 00:15:52.337 Test: admin_create_io_qp_max_qps ...[2024-12-09 11:30:44.472742] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.724 [2024-12-09 11:30:45.573021] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:53.987 [2024-12-09 11:30:45.968475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.987 passed 00:15:53.987 Test: admin_create_io_sq_shared_cq ...[2024-12-09 11:30:46.062238] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.249 [2024-12-09 11:30:46.194025] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:54.249 [2024-12-09 11:30:46.231068] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.249 passed 00:15:54.249 00:15:54.249 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.249 suites 1 1 n/a 0 0 00:15:54.249 tests 18 18 18 0 0 00:15:54.249 asserts 
360 360 360 0 n/a 00:15:54.249 00:15:54.249 Elapsed time = 1.656 seconds 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3484893 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3484893 ']' 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3484893 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3484893 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3484893' 00:15:54.249 killing process with pid 3484893 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3484893 00:15:54.249 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3484893 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:54.511 00:15:54.511 real 0m6.559s 00:15:54.511 user 0m18.674s 00:15:54.511 sys 0m0.518s 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:54.511 ************************************ 00:15:54.511 END TEST nvmf_vfio_user_nvme_compliance 00:15:54.511 ************************************ 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.511 ************************************ 00:15:54.511 START TEST nvmf_vfio_user_fuzz 00:15:54.511 ************************************ 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:54.511 * Looking for test storage... 
00:15:54.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:54.511 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.773 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:54.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.774 --rc genhtml_branch_coverage=1 00:15:54.774 --rc genhtml_function_coverage=1 00:15:54.774 --rc genhtml_legend=1 00:15:54.774 --rc geninfo_all_blocks=1 00:15:54.774 --rc geninfo_unexecuted_blocks=1 00:15:54.774 00:15:54.774 ' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:54.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.774 --rc genhtml_branch_coverage=1 00:15:54.774 --rc genhtml_function_coverage=1 00:15:54.774 --rc genhtml_legend=1 00:15:54.774 --rc geninfo_all_blocks=1 00:15:54.774 --rc geninfo_unexecuted_blocks=1 00:15:54.774 00:15:54.774 ' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:54.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.774 --rc genhtml_branch_coverage=1 00:15:54.774 --rc genhtml_function_coverage=1 00:15:54.774 --rc genhtml_legend=1 00:15:54.774 --rc geninfo_all_blocks=1 00:15:54.774 --rc geninfo_unexecuted_blocks=1 00:15:54.774 00:15:54.774 ' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:54.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.774 --rc genhtml_branch_coverage=1 00:15:54.774 --rc genhtml_function_coverage=1 00:15:54.774 --rc genhtml_legend=1 00:15:54.774 --rc geninfo_all_blocks=1 00:15:54.774 --rc geninfo_unexecuted_blocks=1 00:15:54.774 00:15:54.774 ' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:54.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3486277 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3486277' 00:15:54.774 Process pid: 3486277 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3486277 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3486277 ']' 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
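The launch sequence traced here follows the standard SPDK test pattern: start nvmf_tgt in the background with a shared-memory id (-i 0), tracepoint group mask (-e 0xFFFF) and core mask (-m 0x1, a single core for the fuzz target versus the 0x7 used for the compliance run above), record its pid, arm a trap so the target is killed on any exit path, then block in waitforlisten until the RPC socket appears. A minimal sketch, assuming waitforlisten reduces to polling /var/tmp/spdk.sock (the real helper in SPDK's autotest_common.sh also bounds the retries):

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
while [ ! -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
    sleep 0.1
done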
00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.774 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.719 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.719 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:55.719 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.661 malloc0 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
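With the target up, the trace assembles the whole vfio-user device over RPC before handing it to the fuzzer: create the VFIOUSER transport, back it with a 64 MiB malloc bdev (512-byte blocks), publish that bdev as a namespace of nqn.2021-09.io.spdk:cnode0, and listen on the /var/run/vfio-user socket directory. rpc_cmd in the trace wraps SPDK's scripts/rpc.py, so the equivalent direct calls would look roughly like this (-a on the subsystem allows any host, -s there sets its serial number, and -s 0 on the listener is the service id):

scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0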
00:15:56.661 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:28.778 Fuzzing completed. Shutting down the fuzz application 00:16:28.778 00:16:28.778 Dumping successful admin opcodes: 00:16:28.778 9, 10, 00:16:28.778 Dumping successful io opcodes: 00:16:28.778 0, 00:16:28.778 NS: 0x20000081ef00 I/O qp, Total commands completed: 1179132, total successful commands: 4634, random_seed: 2856634688 00:16:28.778 NS: 0x20000081ef00 admin qp, Total commands completed: 149248, total successful commands: 32, random_seed: 1154514624 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3486277 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3486277 ']' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3486277 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3486277 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3486277' 00:16:28.778 killing process with pid 3486277 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3486277 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3486277 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:28.778 00:16:28.778 real 0m33.830s 00:16:28.778 user 0m40.735s 00:16:28.778 sys 0m23.279s 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:28.778 ************************************ 
00:16:28.778 END TEST nvmf_vfio_user_fuzz 00:16:28.778 ************************************ 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.778 ************************************ 00:16:28.778 START TEST nvmf_auth_target 00:16:28.778 ************************************ 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:28.778 * Looking for test storage... 00:16:28.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.778 --rc genhtml_branch_coverage=1 00:16:28.778 --rc genhtml_function_coverage=1 00:16:28.778 --rc genhtml_legend=1 00:16:28.778 --rc geninfo_all_blocks=1 00:16:28.778 --rc geninfo_unexecuted_blocks=1 00:16:28.778 00:16:28.778 ' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.778 --rc genhtml_branch_coverage=1 00:16:28.778 --rc genhtml_function_coverage=1 00:16:28.778 --rc genhtml_legend=1 00:16:28.778 --rc geninfo_all_blocks=1 00:16:28.778 --rc geninfo_unexecuted_blocks=1 00:16:28.778 00:16:28.778 ' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.778 --rc genhtml_branch_coverage=1 00:16:28.778 --rc genhtml_function_coverage=1 00:16:28.778 --rc genhtml_legend=1 00:16:28.778 --rc geninfo_all_blocks=1 00:16:28.778 --rc geninfo_unexecuted_blocks=1 00:16:28.778 00:16:28.778 ' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:28.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.778 --rc genhtml_branch_coverage=1 00:16:28.778 --rc genhtml_function_coverage=1 00:16:28.778 --rc genhtml_legend=1 00:16:28.778 --rc geninfo_all_blocks=1 00:16:28.778 --rc geninfo_unexecuted_blocks=1 00:16:28.778 00:16:28.778 ' 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.778 11:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.778 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.779 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:36.924 
11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:36.924 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.924 11:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:36.924 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:36.924 Found net devices under 0000:31:00.0: cvl_0_0 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:36.924 Found net devices under 0000:31:00.1: cvl_0_1 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.924 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.925 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.925 11:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:16:36.925 00:16:36.925 --- 10.0.0.2 ping statistics --- 00:16:36.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.925 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:16:36.925 00:16:36.925 --- 10.0.0.1 ping statistics --- 00:16:36.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.925 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3496662 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3496662 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3496662 ']' 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
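Up to this point the harness has turned two back-to-back E810 ports into an isolated NVMe/TCP test bed: one port is moved into a network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens port 4420, both directions are ping-verified, and nvme-tcp is loaded. A minimal sketch of the same plumbing, with hypothetical interface names portA/portB standing in for cvl_0_0/cvl_0_1:

  #!/usr/bin/env bash
  # Sketch of the loopback topology built above (portA/portB are
  # placeholder names; this log uses cvl_0_0/cvl_0_1).
  set -e
  NS=spdk_tgt_ns

  # Isolate the target-side port in its own namespace so initiator and
  # target traffic actually crosses the physical link between the ports.
  ip netns add "$NS"
  ip link set portA netns "$NS"

  # Initiator keeps 10.0.0.1; the target answers on 10.0.0.2 in the netns.
  ip addr add 10.0.0.1/24 dev portB
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev portA
  ip link set portB up
  ip netns exec "$NS" ip link set portA up
  ip netns exec "$NS" ip link set lo up

  # Admit NVMe/TCP (port 4420) through the initiator-side interface,
  # then verify reachability both ways before starting any test.
  iptables -I INPUT 1 -i portB -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
  modprobe nvme-tcp

Launching nvmf_tgt with ip netns exec, as the log does next, is what forces the test traffic onto the wire instead of the host loopback path.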
00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.925 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3496963 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5b15e8facbff7fb610f118008decc2f18fde2126b551d3b7 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gHL 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5b15e8facbff7fb610f118008decc2f18fde2126b551d3b7 0 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5b15e8facbff7fb610f118008decc2f18fde2126b551d3b7 0 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5b15e8facbff7fb610f118008decc2f18fde2126b551d3b7 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:36.925 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
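The gen_dhchap_key null 48 call above draws 24 random bytes as 48 hex characters and hands them to a short Python formatter. A sketch of what that formatting step plausibly produces, assuming the standard DH-HMAC-CHAP interchange representation (ASCII secret plus a little-endian CRC32 trailer, base64-wrapped, where the two-digit field is the hash id: 00 null, 01 sha256, 02 sha384, 03 sha512); that assumption is consistent with the DHHC-1:00:NWIxNWU4... secret that reappears verbatim at the nvme connect step further down:

  # Sketch of the formatter's job (format assumption stated above).
  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in "gen_dhchap_key null 48"
  python3 - "$key" <<'PY'
  import base64, sys, zlib
  secret = sys.argv[1].encode()                   # the ASCII hex string is the secret
  crc = zlib.crc32(secret).to_bytes(4, "little")  # 4-byte CRC32 integrity trailer
  print("DHHC-1:00:%s:" % base64.b64encode(secret + crc).decode())
  PY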
00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gHL 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gHL 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.gHL 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6a521d1b2790541f6bfd47de4d26a428310926bbea78c37929f503572dddf705 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hwb 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6a521d1b2790541f6bfd47de4d26a428310926bbea78c37929f503572dddf705 3 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6a521d1b2790541f6bfd47de4d26a428310926bbea78c37929f503572dddf705 3 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6a521d1b2790541f6bfd47de4d26a428310926bbea78c37929f503572dddf705 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hwb 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hwb 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.hwb 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
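Going the other way is handy when an authentication failure might be a mangled secret rather than a key mismatch: split off the DHHC-1:<id>: prefix, base64-decode, and recompute the CRC trailer. A sketch under the same format assumption (recent nvme-cli builds also ship nvme gen-dhchap-key and nvme check-dhchap-key for these two jobs, where available):

  # Sketch: sanity-check a DHHC-1 secret string before handing it to nvme-cli
  # (same representation assumption as the generator sketch above).
  check_dhchap() {
    python3 - "$1" <<'PY'
  import base64, sys, zlib
  _, digest_id, b64, _ = sys.argv[1].split(":")
  blob = base64.b64decode(b64)
  secret, crc = blob[:-4], blob[-4:]
  assert zlib.crc32(secret).to_bytes(4, "little") == crc, "corrupt secret: CRC mismatch"
  print("ok: digest id %s, %d-char secret" % (digest_id, len(secret)))
  PY
  }

Fed the generator's output, this should print ok; a truncated or re-wrapped secret fails the CRC check before it ever reaches the target.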
00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:37.186 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=132a0915fe9ef4c61ec4ddf79b7b5542 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oj0 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 132a0915fe9ef4c61ec4ddf79b7b5542 1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 132a0915fe9ef4c61ec4ddf79b7b5542 1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=132a0915fe9ef4c61ec4ddf79b7b5542 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oj0 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oj0 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.oj0 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=842c3faf0949e92e32313cd00ac4bcc37e36a80e46414347 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Gjh 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 842c3faf0949e92e32313cd00ac4bcc37e36a80e46414347 2 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 842c3faf0949e92e32313cd00ac4bcc37e36a80e46414347 2 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:37.187 11:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=842c3faf0949e92e32313cd00ac4bcc37e36a80e46414347 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Gjh 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Gjh 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Gjh 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=96b7675c9e58d9dd5370c785667e109fadc42d9a5b9cdbf1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Qij 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 96b7675c9e58d9dd5370c785667e109fadc42d9a5b9cdbf1 2 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 96b7675c9e58d9dd5370c785667e109fadc42d9a5b9cdbf1 2 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=96b7675c9e58d9dd5370c785667e109fadc42d9a5b9cdbf1 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:37.187 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Qij 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Qij 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Qij 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
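Note the length/digest pairing in these gen_dhchap_key calls: by the convention this script follows, every secret is 32, 48, or 64 hex characters, matched to the HMAC that will consume it (sha256/sha384/sha512), while null-digest keys reuse one of the same three lengths. The matrix the generator walks, as a small sketch:

  # digest ids and secret lengths exercised by the gen_dhchap_key calls here
  declare -A digest_id=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  for spec in "null 48" "sha256 32" "sha384 48" "sha512 64"; do
      read -r digest len <<<"$spec"
      printf '%-6s id=%02x secret=%d hex chars (%d random bytes)\n' \
          "$digest" "${digest_id[$digest]}" "$len" $((len / 2))
  done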
00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3f964e0bba5ee315cf3695667d15bea6 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ALc 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3f964e0bba5ee315cf3695667d15bea6 1 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3f964e0bba5ee315cf3695667d15bea6 1 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3f964e0bba5ee315cf3695667d15bea6 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ALc 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ALc 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ALc 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eec4f645686ac7ab38ddd37c596b09086fee0cdbf53725be9ab04163a224e394 00:16:37.447 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9kL 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key eec4f645686ac7ab38ddd37c596b09086fee0cdbf53725be9ab04163a224e394 3 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eec4f645686ac7ab38ddd37c596b09086fee0cdbf53725be9ab04163a224e394 3 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eec4f645686ac7ab38ddd37c596b09086fee0cdbf53725be9ab04163a224e394 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9kL 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9kL 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.9kL 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3496662 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3496662 ']' 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.448 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3496963 /var/tmp/host.sock 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3496963 ']' 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:37.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
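With both daemons up (the nvmf target in the namespace on /var/tmp/spdk.sock, the host-side spdk_tgt on /var/tmp/host.sock), the rest of the log is a long sweep of register, connect, and verify cycles over every digest/dhgroup/key combination. Condensed to a single pass, using the RPC names and flags that appear verbatim below but with a placeholder rpc.py path and the host NQN abbreviated to a variable:

  RPC=/path/to/spdk/scripts/rpc.py      # placeholder; the log uses the jenkins checkout
  HOST_SOCK=/var/tmp/host.sock
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Register the secret files with both daemons' keyrings.
  $RPC keyring_file_add_key key0 /tmp/spdk.key-null.gHL
  $RPC -s $HOST_SOCK keyring_file_add_key key0 /tmp/spdk.key-null.gHL

  # Restrict the initiator to one digest/dhgroup combination per pass.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

  # Target: require DH-HMAC-CHAP for this host; initiator: attach with the pair
  # (key0 is what the host presents, ckey0 what the controller presents back).
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the qpair completed authentication with the expected parameters,
  # mirroring the jq checks the test performs on nvmf_subsystem_get_qpairs.
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'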
00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gHL 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gHL 00:16:37.708 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gHL 00:16:37.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.hwb ]] 00:16:37.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hwb 00:16:37.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hwb 00:16:37.968 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hwb 00:16:38.230 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:38.230 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oj0 00:16:38.230 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.230 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.230 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.230 11:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.oj0 00:16:38.230 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.oj0 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Gjh ]] 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Gjh 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Gjh 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Gjh 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Qij 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Qij 00:16:38.490 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Qij 00:16:38.751 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ALc ]] 00:16:38.751 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ALc 00:16:38.751 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.751 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.751 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ALc 00:16:38.751 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ALc 00:16:39.010 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:39.010 11:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9kL 00:16:39.010 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.010 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.010 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.010 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.9kL 00:16:39.010 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.9kL 00:16:39.010 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:39.010 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:39.010 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.010 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.010 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.010 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.270 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:39.270 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.270 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.271 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.271 
11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.531 00:16:39.531 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.531 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.531 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.791 { 00:16:39.791 "cntlid": 1, 00:16:39.791 "qid": 0, 00:16:39.791 "state": "enabled", 00:16:39.791 "thread": "nvmf_tgt_poll_group_000", 00:16:39.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:39.791 "listen_address": { 00:16:39.791 "trtype": "TCP", 00:16:39.791 "adrfam": "IPv4", 00:16:39.791 "traddr": "10.0.0.2", 00:16:39.791 "trsvcid": "4420" 00:16:39.791 }, 00:16:39.791 "peer_address": { 00:16:39.791 "trtype": "TCP", 00:16:39.791 "adrfam": "IPv4", 00:16:39.791 "traddr": "10.0.0.1", 00:16:39.791 "trsvcid": "34680" 00:16:39.791 }, 00:16:39.791 "auth": { 00:16:39.791 "state": "completed", 00:16:39.791 "digest": "sha256", 00:16:39.791 "dhgroup": "null" 00:16:39.791 } 00:16:39.791 } 00:16:39.791 ]' 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.791 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.052 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:16:40.052 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.993 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.993 11:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.993 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.254 00:16:41.254 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.254 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.254 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.515 { 00:16:41.515 "cntlid": 3, 00:16:41.515 "qid": 0, 00:16:41.515 "state": "enabled", 00:16:41.515 "thread": "nvmf_tgt_poll_group_000", 00:16:41.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:41.515 "listen_address": { 00:16:41.515 "trtype": "TCP", 00:16:41.515 "adrfam": "IPv4", 00:16:41.515 "traddr": "10.0.0.2", 00:16:41.515 "trsvcid": "4420" 00:16:41.515 }, 00:16:41.515 "peer_address": { 00:16:41.515 "trtype": "TCP", 00:16:41.515 "adrfam": "IPv4", 00:16:41.515 "traddr": "10.0.0.1", 00:16:41.515 "trsvcid": "34708" 00:16:41.515 }, 00:16:41.515 "auth": { 00:16:41.515 "state": "completed", 00:16:41.515 "digest": "sha256", 00:16:41.515 "dhgroup": "null" 00:16:41.515 } 00:16:41.515 } 00:16:41.515 ]' 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.515 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.775 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:16:41.775 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:16:42.345 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.606 11:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.606 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.865 00:16:42.865 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.865 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.865 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.125 { 00:16:43.125 "cntlid": 5, 00:16:43.125 "qid": 0, 00:16:43.125 "state": "enabled", 00:16:43.125 "thread": "nvmf_tgt_poll_group_000", 00:16:43.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:43.125 "listen_address": { 00:16:43.125 "trtype": "TCP", 00:16:43.125 "adrfam": "IPv4", 00:16:43.125 "traddr": "10.0.0.2", 00:16:43.125 "trsvcid": "4420" 00:16:43.125 }, 00:16:43.125 "peer_address": { 00:16:43.125 "trtype": "TCP", 00:16:43.125 "adrfam": "IPv4", 00:16:43.125 "traddr": "10.0.0.1", 00:16:43.125 "trsvcid": "34734" 00:16:43.125 }, 00:16:43.125 "auth": { 00:16:43.125 "state": "completed", 00:16:43.125 "digest": "sha256", 00:16:43.125 "dhgroup": "null" 00:16:43.125 } 00:16:43.125 } 00:16:43.125 ]' 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.125 11:31:35 
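Up to this point the log has repeated one fixed verification cycle per key: restrict the host's allowed DH-HMAC-CHAP parameters, register the host NQN on the subsystem with that key, attach a controller through the host-side bdev_nvme layer, and assert that the resulting qpair reports completed authentication before tearing everything down again. A condensed sketch of that cycle, reconstructed from the commands visible above (the $rpc/$subnqn/$hostnqn variable names and the target answering on the default RPC socket are assumptions, and the real script also round-trips through nvme-cli before removing the host):

    #!/usr/bin/env bash
    # Sketch of one DH-HMAC-CHAP verification cycle as it appears in this log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    for keyid in 1 2 3; do
        # host side: allow exactly one digest/dhgroup combination
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups null
        # target side (default socket assumed): register the host with key$keyid
        "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
        # authentication happens implicitly when the controller attaches
        "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
            --dhchap-key "key$keyid"
        # the test asserts this prints "completed"
        "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
        "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
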
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.125 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.384 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:16:43.384 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.325 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.586 00:16:44.586 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.586 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.586 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.846 { 00:16:44.846 "cntlid": 7, 00:16:44.846 "qid": 0, 00:16:44.846 "state": "enabled", 00:16:44.846 "thread": "nvmf_tgt_poll_group_000", 00:16:44.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:44.846 "listen_address": { 00:16:44.846 "trtype": "TCP", 00:16:44.846 "adrfam": "IPv4", 00:16:44.846 "traddr": "10.0.0.2", 00:16:44.846 "trsvcid": "4420" 00:16:44.846 }, 00:16:44.846 "peer_address": { 00:16:44.846 "trtype": "TCP", 00:16:44.846 "adrfam": "IPv4", 00:16:44.846 "traddr": "10.0.0.1", 00:16:44.846 "trsvcid": "34768" 00:16:44.846 }, 00:16:44.846 "auth": { 00:16:44.846 "state": "completed", 00:16:44.846 "digest": "sha256", 00:16:44.846 "dhgroup": "null" 00:16:44.846 } 00:16:44.846 } 00:16:44.846 ]' 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.846 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.106 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:16:45.106 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.048 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.048 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.049 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.049 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.309 00:16:46.309 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.309 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.309 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.571 { 00:16:46.571 "cntlid": 9, 00:16:46.571 "qid": 0, 00:16:46.571 "state": "enabled", 00:16:46.571 "thread": "nvmf_tgt_poll_group_000", 00:16:46.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:46.571 "listen_address": { 00:16:46.571 "trtype": "TCP", 00:16:46.571 "adrfam": "IPv4", 00:16:46.571 "traddr": "10.0.0.2", 00:16:46.571 "trsvcid": "4420" 00:16:46.571 }, 00:16:46.571 "peer_address": { 00:16:46.571 "trtype": "TCP", 00:16:46.571 "adrfam": "IPv4", 00:16:46.571 "traddr": "10.0.0.1", 00:16:46.571 "trsvcid": "34798" 00:16:46.571 }, 00:16:46.571 "auth": { 00:16:46.571 "state": "completed", 00:16:46.571 "digest": "sha256", 00:16:46.571 "dhgroup": "ffdhe2048" 00:16:46.571 } 00:16:46.571 } 00:16:46.571 ]' 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.571 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.833 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:16:46.833 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.778 11:31:39 
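The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion that keeps appearing is what makes the controller key optional: for key3 the log's nvmf_subsystem_add_host call carries no --dhchap-ctrlr-key at all, so only unidirectional authentication is exercised there. A minimal, self-contained illustration of the ${var:+...} idiom (array contents here are hypothetical):

    #!/usr/bin/env bash
    # ${arr[i]:+words} expands to "words" only when arr[i] is set and non-empty,
    # so a missing ckeys[3] makes $ckey an empty array and the flag vanishes.
    declare -a ckeys=([0]=ck0 [1]=ck1 [2]=ck2)   # hypothetical values; no ckeys[3]
    for i in 0 1 2 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i: ${ckey[@]:-<no controller key>}"
    done
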
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.778 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.040 00:16:48.040 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.040 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.040 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.301 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.301 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.301 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.301 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.301 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.301 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.301 { 00:16:48.301 "cntlid": 11, 00:16:48.301 "qid": 0, 00:16:48.301 "state": "enabled", 00:16:48.301 "thread": "nvmf_tgt_poll_group_000", 00:16:48.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:48.302 "listen_address": { 00:16:48.302 "trtype": "TCP", 00:16:48.302 "adrfam": "IPv4", 00:16:48.302 "traddr": "10.0.0.2", 00:16:48.302 "trsvcid": "4420" 00:16:48.302 }, 00:16:48.302 "peer_address": { 00:16:48.302 "trtype": "TCP", 00:16:48.302 "adrfam": "IPv4", 00:16:48.302 "traddr": "10.0.0.1", 00:16:48.302 "trsvcid": "34832" 00:16:48.302 }, 00:16:48.302 "auth": { 00:16:48.302 "state": "completed", 00:16:48.302 "digest": "sha256", 00:16:48.302 "dhgroup": "ffdhe2048" 00:16:48.302 } 00:16:48.302 } 00:16:48.302 ]' 00:16:48.302 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.302 11:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.302 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.302 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.302 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.302 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.302 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.302 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.563 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:16:48.563 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.509 11:31:41 
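Each cycle also re-drives the kernel initiator with nvme-cli, passing the same secrets the target was configured with. The short flags in the log's connect line are, in long form, -i = --nr-io-queues, -q = --hostnqn, and -l = --ctrl-loss-tmo (glosses from nvme-cli, not from the log); the DHHC-1:<t>: prefix on each secret encodes how the base64 payload was transformed, with 00 meaning cleartext and 01/02/03 meaning SHA-256/384/512 under the NVMe in-band authentication key format. Secrets elided below:

    # same shape as the log's connect/disconnect pair, with the secrets
    # replaced by placeholders; 01/02 mark SHA-256/SHA-384-transformed secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
        --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
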
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.509 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.770 00:16:49.771 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.771 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.771 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.771 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.771 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.771 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.771 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.032 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.032 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.032 { 00:16:50.032 "cntlid": 13, 00:16:50.032 "qid": 0, 00:16:50.032 "state": "enabled", 00:16:50.032 "thread": "nvmf_tgt_poll_group_000", 00:16:50.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:50.032 "listen_address": { 00:16:50.032 "trtype": "TCP", 00:16:50.032 "adrfam": "IPv4", 00:16:50.032 "traddr": "10.0.0.2", 00:16:50.032 "trsvcid": "4420" 00:16:50.032 }, 00:16:50.032 "peer_address": { 00:16:50.032 "trtype": "TCP", 00:16:50.032 "adrfam": "IPv4", 00:16:50.032 "traddr": "10.0.0.1", 00:16:50.032 "trsvcid": "50430" 00:16:50.032 }, 00:16:50.032 "auth": { 00:16:50.032 "state": "completed", 00:16:50.032 "digest": 
"sha256", 00:16:50.032 "dhgroup": "ffdhe2048" 00:16:50.032 } 00:16:50.032 } 00:16:50.032 ]' 00:16:50.032 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.032 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.032 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.032 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.032 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.032 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.032 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.032 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.294 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:16:50.294 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.867 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.127 11:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.127 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.387 00:16:51.387 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.387 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.387 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.648 { 00:16:51.648 "cntlid": 15, 00:16:51.648 "qid": 0, 00:16:51.648 "state": "enabled", 00:16:51.648 "thread": "nvmf_tgt_poll_group_000", 00:16:51.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:51.648 "listen_address": { 00:16:51.648 "trtype": "TCP", 00:16:51.648 "adrfam": "IPv4", 00:16:51.648 "traddr": "10.0.0.2", 00:16:51.648 "trsvcid": "4420" 00:16:51.648 }, 00:16:51.648 "peer_address": { 00:16:51.648 "trtype": "TCP", 00:16:51.648 "adrfam": "IPv4", 00:16:51.648 "traddr": "10.0.0.1", 00:16:51.648 
"trsvcid": "50470" 00:16:51.648 }, 00:16:51.648 "auth": { 00:16:51.648 "state": "completed", 00:16:51.648 "digest": "sha256", 00:16:51.648 "dhgroup": "ffdhe2048" 00:16:51.648 } 00:16:51.648 } 00:16:51.648 ]' 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.648 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.910 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:16:51.910 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:52.852 11:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.852 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.115 00:16:53.115 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.115 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.115 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.375 { 00:16:53.375 "cntlid": 17, 00:16:53.375 "qid": 0, 00:16:53.375 "state": "enabled", 00:16:53.375 "thread": "nvmf_tgt_poll_group_000", 00:16:53.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:53.375 "listen_address": { 00:16:53.375 "trtype": "TCP", 00:16:53.375 "adrfam": "IPv4", 
00:16:53.375 "traddr": "10.0.0.2", 00:16:53.375 "trsvcid": "4420" 00:16:53.375 }, 00:16:53.375 "peer_address": { 00:16:53.375 "trtype": "TCP", 00:16:53.375 "adrfam": "IPv4", 00:16:53.375 "traddr": "10.0.0.1", 00:16:53.375 "trsvcid": "50494" 00:16:53.375 }, 00:16:53.375 "auth": { 00:16:53.375 "state": "completed", 00:16:53.375 "digest": "sha256", 00:16:53.375 "dhgroup": "ffdhe3072" 00:16:53.375 } 00:16:53.375 } 00:16:53.375 ]' 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.375 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.636 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:16:53.636 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.580 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.841 00:16:54.841 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.841 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.841 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.102 { 
00:16:55.102 "cntlid": 19, 00:16:55.102 "qid": 0, 00:16:55.102 "state": "enabled", 00:16:55.102 "thread": "nvmf_tgt_poll_group_000", 00:16:55.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:55.102 "listen_address": { 00:16:55.102 "trtype": "TCP", 00:16:55.102 "adrfam": "IPv4", 00:16:55.102 "traddr": "10.0.0.2", 00:16:55.102 "trsvcid": "4420" 00:16:55.102 }, 00:16:55.102 "peer_address": { 00:16:55.102 "trtype": "TCP", 00:16:55.102 "adrfam": "IPv4", 00:16:55.102 "traddr": "10.0.0.1", 00:16:55.102 "trsvcid": "50514" 00:16:55.102 }, 00:16:55.102 "auth": { 00:16:55.102 "state": "completed", 00:16:55.102 "digest": "sha256", 00:16:55.102 "dhgroup": "ffdhe3072" 00:16:55.102 } 00:16:55.102 } 00:16:55.102 ]' 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.102 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.364 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:16:55.364 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.308 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.569 00:16:56.569 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.569 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.569 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.831 11:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.831 { 00:16:56.831 "cntlid": 21, 00:16:56.831 "qid": 0, 00:16:56.831 "state": "enabled", 00:16:56.831 "thread": "nvmf_tgt_poll_group_000", 00:16:56.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:56.831 "listen_address": { 00:16:56.831 "trtype": "TCP", 00:16:56.831 "adrfam": "IPv4", 00:16:56.831 "traddr": "10.0.0.2", 00:16:56.831 "trsvcid": "4420" 00:16:56.831 }, 00:16:56.831 "peer_address": { 00:16:56.831 "trtype": "TCP", 00:16:56.831 "adrfam": "IPv4", 00:16:56.831 "traddr": "10.0.0.1", 00:16:56.831 "trsvcid": "50538" 00:16:56.831 }, 00:16:56.831 "auth": { 00:16:56.831 "state": "completed", 00:16:56.831 "digest": "sha256", 00:16:56.831 "dhgroup": "ffdhe3072" 00:16:56.831 } 00:16:56.831 } 00:16:56.831 ]' 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.831 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.092 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:16:57.092 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:57.663 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.924 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.185 00:16:58.185 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.185 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.185 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.445 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.445 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.445 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.445 11:31:50 
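
Note the asymmetry in the key3 pass above: ckeys[3] is empty, so the `${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}` expansion at target/auth.sh@68 drops the controller key entirely and `nvmf_subsystem_add_host` is invoked with `--dhchap-key key3` alone, making authentication unidirectional (the target verifies the host, but not vice versa). A sketch of the two variants, with the names used here:

    # Bidirectional: host proves itself with key2, target proves itself with ckey2
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Unidirectional: no controller key, so only the host is authenticated
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-key key3
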
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.446 { 00:16:58.446 "cntlid": 23, 00:16:58.446 "qid": 0, 00:16:58.446 "state": "enabled", 00:16:58.446 "thread": "nvmf_tgt_poll_group_000", 00:16:58.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:58.446 "listen_address": { 00:16:58.446 "trtype": "TCP", 00:16:58.446 "adrfam": "IPv4", 00:16:58.446 "traddr": "10.0.0.2", 00:16:58.446 "trsvcid": "4420" 00:16:58.446 }, 00:16:58.446 "peer_address": { 00:16:58.446 "trtype": "TCP", 00:16:58.446 "adrfam": "IPv4", 00:16:58.446 "traddr": "10.0.0.1", 00:16:58.446 "trsvcid": "50548" 00:16:58.446 }, 00:16:58.446 "auth": { 00:16:58.446 "state": "completed", 00:16:58.446 "digest": "sha256", 00:16:58.446 "dhgroup": "ffdhe3072" 00:16:58.446 } 00:16:58.446 } 00:16:58.446 ]' 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.446 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.706 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:16:58.706 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.648 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.909 00:16:59.909 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.909 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.909 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.170 { 00:17:00.170 "cntlid": 25, 00:17:00.170 "qid": 0, 00:17:00.170 "state": "enabled", 00:17:00.170 "thread": "nvmf_tgt_poll_group_000", 00:17:00.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:00.170 "listen_address": { 00:17:00.170 "trtype": "TCP", 00:17:00.170 "adrfam": "IPv4", 00:17:00.170 "traddr": "10.0.0.2", 00:17:00.170 "trsvcid": "4420" 00:17:00.170 }, 00:17:00.170 "peer_address": { 00:17:00.170 "trtype": "TCP", 00:17:00.170 "adrfam": "IPv4", 00:17:00.170 "traddr": "10.0.0.1", 00:17:00.170 "trsvcid": "52268" 00:17:00.170 }, 00:17:00.170 "auth": { 00:17:00.170 "state": "completed", 00:17:00.170 "digest": "sha256", 00:17:00.170 "dhgroup": "ffdhe4096" 00:17:00.170 } 00:17:00.170 } 00:17:00.170 ]' 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.170 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.431 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:00.431 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
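
The qpair acceptance check shown above (target/auth.sh@73–@77) can be reproduced directly; a sketch assuming the same target RPC socket and subsystem NQN, piping the output through jq exactly as the script does:

    # Confirm the controller attached and the negotiated auth parameters match the request
    [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
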
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.373 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.633 00:17:01.633 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.633 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.633 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.895 { 00:17:01.895 "cntlid": 27, 00:17:01.895 "qid": 0, 00:17:01.895 "state": "enabled", 00:17:01.895 "thread": "nvmf_tgt_poll_group_000", 00:17:01.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:01.895 "listen_address": { 00:17:01.895 "trtype": "TCP", 00:17:01.895 "adrfam": "IPv4", 00:17:01.895 "traddr": "10.0.0.2", 00:17:01.895 "trsvcid": "4420" 00:17:01.895 }, 00:17:01.895 "peer_address": { 00:17:01.895 "trtype": "TCP", 00:17:01.895 "adrfam": "IPv4", 00:17:01.895 "traddr": "10.0.0.1", 00:17:01.895 "trsvcid": "52298" 00:17:01.895 }, 00:17:01.895 "auth": { 00:17:01.895 "state": "completed", 00:17:01.895 "digest": "sha256", 00:17:01.895 "dhgroup": "ffdhe4096" 00:17:01.895 } 00:17:01.895 } 00:17:01.895 ]' 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.895 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.895 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.895 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.895 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.156 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:02.156 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:03.109 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:03.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.109 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.109 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.109 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.109 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.370 00:17:03.370 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:17:03.370 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.370 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.631 { 00:17:03.631 "cntlid": 29, 00:17:03.631 "qid": 0, 00:17:03.631 "state": "enabled", 00:17:03.631 "thread": "nvmf_tgt_poll_group_000", 00:17:03.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:03.631 "listen_address": { 00:17:03.631 "trtype": "TCP", 00:17:03.631 "adrfam": "IPv4", 00:17:03.631 "traddr": "10.0.0.2", 00:17:03.631 "trsvcid": "4420" 00:17:03.631 }, 00:17:03.631 "peer_address": { 00:17:03.631 "trtype": "TCP", 00:17:03.631 "adrfam": "IPv4", 00:17:03.631 "traddr": "10.0.0.1", 00:17:03.631 "trsvcid": "52336" 00:17:03.631 }, 00:17:03.631 "auth": { 00:17:03.631 "state": "completed", 00:17:03.631 "digest": "sha256", 00:17:03.631 "dhgroup": "ffdhe4096" 00:17:03.631 } 00:17:03.631 } 00:17:03.631 ]' 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.631 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.892 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.892 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.892 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.892 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:03.892 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: 
--dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.833 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.094 00:17:05.355 11:31:57 
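
For the kernel-initiator leg, the same keys are passed to nvme-cli in their transport representation. The `DHHC-1:<t>:` prefix encodes the key transformation (00 = none; 01/02/03 = SHA-256/-384/-512, per nvme-cli's gen-dhchap-key convention — that reading is inferred from the tool, not from this log), followed by the base64 key material and CRC. The connect call mirrors the ones interleaved above; the long secrets are elided here, their full values appear in the log:

    # Kernel initiator: DH-HMAC-CHAP via nvme-cli, flags as used in this run
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
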
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.355 { 00:17:05.355 "cntlid": 31, 00:17:05.355 "qid": 0, 00:17:05.355 "state": "enabled", 00:17:05.355 "thread": "nvmf_tgt_poll_group_000", 00:17:05.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:05.355 "listen_address": { 00:17:05.355 "trtype": "TCP", 00:17:05.355 "adrfam": "IPv4", 00:17:05.355 "traddr": "10.0.0.2", 00:17:05.355 "trsvcid": "4420" 00:17:05.355 }, 00:17:05.355 "peer_address": { 00:17:05.355 "trtype": "TCP", 00:17:05.355 "adrfam": "IPv4", 00:17:05.355 "traddr": "10.0.0.1", 00:17:05.355 "trsvcid": "52352" 00:17:05.355 }, 00:17:05.355 "auth": { 00:17:05.355 "state": "completed", 00:17:05.355 "digest": "sha256", 00:17:05.355 "dhgroup": "ffdhe4096" 00:17:05.355 } 00:17:05.355 } 00:17:05.355 ]' 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.355 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.616 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.616 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.616 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.616 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.616 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.616 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:05.616 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:06.560 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.560 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.561 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.132 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.132 { 00:17:07.132 "cntlid": 33, 00:17:07.132 "qid": 0, 00:17:07.132 "state": "enabled", 00:17:07.132 "thread": "nvmf_tgt_poll_group_000", 00:17:07.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:07.132 "listen_address": { 00:17:07.132 "trtype": "TCP", 00:17:07.132 "adrfam": "IPv4", 00:17:07.132 "traddr": "10.0.0.2", 00:17:07.132 "trsvcid": "4420" 00:17:07.132 }, 00:17:07.132 "peer_address": { 00:17:07.132 "trtype": "TCP", 00:17:07.132 "adrfam": "IPv4", 00:17:07.132 "traddr": "10.0.0.1", 00:17:07.132 "trsvcid": "52382" 00:17:07.132 }, 00:17:07.132 "auth": { 00:17:07.132 "state": "completed", 00:17:07.132 "digest": "sha256", 00:17:07.132 "dhgroup": "ffdhe6144" 00:17:07.132 } 00:17:07.132 } 00:17:07.132 ]' 00:17:07.132 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.394 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.394 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.394 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.394 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.394 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.394 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.394 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.655 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret 
DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:07.656 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.230 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
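
The `for dhgroup` / `for keyid` loop headers at target/auth.sh@119 and @120, which recur throughout this output, give the overall shape of the phase; reconstructed as a sketch (listing only the DH groups visible in this slice of the log):

    # Every DH group is exercised with every key index; connect_authenticate
    # (target/auth.sh@123) performs one add_host/attach/verify/detach cycle
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
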
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.491 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.063 00:17:09.063 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.063 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.063 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.063 { 00:17:09.063 "cntlid": 35, 00:17:09.063 "qid": 0, 00:17:09.063 "state": "enabled", 00:17:09.063 "thread": "nvmf_tgt_poll_group_000", 00:17:09.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:09.063 "listen_address": { 00:17:09.063 "trtype": "TCP", 00:17:09.063 "adrfam": "IPv4", 00:17:09.063 "traddr": "10.0.0.2", 00:17:09.063 "trsvcid": "4420" 00:17:09.063 }, 00:17:09.063 "peer_address": { 00:17:09.063 "trtype": "TCP", 00:17:09.063 "adrfam": "IPv4", 00:17:09.063 "traddr": "10.0.0.1", 00:17:09.063 "trsvcid": "46056" 00:17:09.063 }, 00:17:09.063 "auth": { 00:17:09.063 "state": "completed", 00:17:09.063 "digest": "sha256", 00:17:09.063 "dhgroup": "ffdhe6144" 00:17:09.063 } 00:17:09.063 } 00:17:09.063 ]' 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.063 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.324 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.324 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.324 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.324 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.324 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:09.324 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.265 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.835 00:17:10.835 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.835 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.835 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.835 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.835 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.835 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.835 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.836 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.836 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.836 { 00:17:10.836 "cntlid": 37, 00:17:10.836 "qid": 0, 00:17:10.836 "state": "enabled", 00:17:10.836 "thread": "nvmf_tgt_poll_group_000", 00:17:10.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:10.836 "listen_address": { 00:17:10.836 "trtype": "TCP", 00:17:10.836 "adrfam": "IPv4", 00:17:10.836 "traddr": "10.0.0.2", 00:17:10.836 "trsvcid": "4420" 00:17:10.836 }, 00:17:10.836 "peer_address": { 00:17:10.836 "trtype": "TCP", 00:17:10.836 "adrfam": "IPv4", 00:17:10.836 "traddr": "10.0.0.1", 00:17:10.836 "trsvcid": "46078" 00:17:10.836 }, 00:17:10.836 "auth": { 00:17:10.836 "state": "completed", 00:17:10.836 "digest": "sha256", 00:17:10.836 "dhgroup": "ffdhe6144" 00:17:10.836 } 00:17:10.836 } 00:17:10.836 ]' 00:17:10.836 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.096 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.097 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.097 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.097 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.097 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.097 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:11.097 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.357 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:11.357 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:11.928 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.928 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:11.928 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.928 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.928 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.928 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.928 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.928 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.189 11:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.189 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.449 00:17:12.449 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.449 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.449 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.710 { 00:17:12.710 "cntlid": 39, 00:17:12.710 "qid": 0, 00:17:12.710 "state": "enabled", 00:17:12.710 "thread": "nvmf_tgt_poll_group_000", 00:17:12.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:12.710 "listen_address": { 00:17:12.710 "trtype": "TCP", 00:17:12.710 "adrfam": "IPv4", 00:17:12.710 "traddr": "10.0.0.2", 00:17:12.710 "trsvcid": "4420" 00:17:12.710 }, 00:17:12.710 "peer_address": { 00:17:12.710 "trtype": "TCP", 00:17:12.710 "adrfam": "IPv4", 00:17:12.710 "traddr": "10.0.0.1", 00:17:12.710 "trsvcid": "46090" 00:17:12.710 }, 00:17:12.710 "auth": { 00:17:12.710 "state": "completed", 00:17:12.710 "digest": "sha256", 00:17:12.710 "dhgroup": "ffdhe6144" 00:17:12.710 } 00:17:12.710 } 00:17:12.710 ]' 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.710 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.970 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:12.971 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.971 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.971 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:12.971 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:13.540 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.801 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:13.801 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.801 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.801 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.801 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
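
Every connect_authenticate pass in this trace follows the same pattern: bdev_nvme_set_options pins the host-side bdev layer to a single digest/dhgroup pair, nvmf_subsystem_add_host authorizes the host NQN on the target with a DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is under test), bdev_nvme_attach_controller performs the handshake, the qpair returned by nvmf_subsystem_get_qpairs is checked with jq for the negotiated digest, dhgroup, and a completed auth state, and bdev_nvme_detach_controller tears the session down. A minimal sketch of one such iteration, assuming key0/ckey0 were loaded into the keyring earlier in the run and that the target app answers on the default RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_nqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  subsys=nqn.2024-03.io.spdk:cnode0

  # Host-side bdev layer (RPC server behind /var/tmp/host.sock): allow
  # exactly one digest and one DH group for DH-HMAC-CHAP.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Target side: authorize the host NQN with a host key and a ctrlr key.
  "$rpc" nvmf_subsystem_add_host "$subsys" "$host_nqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller; the authentication exchange runs here,
  # so a failed handshake surfaces as a failed RPC.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$host_nqn" -n "$subsys" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target side: read back what the qpair actually negotiated.
  "$rpc" nvmf_subsystem_get_qpairs "$subsys" | jq -r '.[0].auth'

  # Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Asserting the digest and dhgroup through nvmf_subsystem_get_qpairs, rather than trusting the attach return code alone, is what lets the test prove that the intended parameters were negotiated and not merely that some authentication succeeded.
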
00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.802 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.374 00:17:14.374 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.374 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.374 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.637 { 00:17:14.637 "cntlid": 41, 00:17:14.637 "qid": 0, 00:17:14.637 "state": "enabled", 00:17:14.637 "thread": "nvmf_tgt_poll_group_000", 00:17:14.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:14.637 "listen_address": { 00:17:14.637 "trtype": "TCP", 00:17:14.637 "adrfam": "IPv4", 00:17:14.637 "traddr": "10.0.0.2", 00:17:14.637 "trsvcid": "4420" 00:17:14.637 }, 00:17:14.637 "peer_address": { 00:17:14.637 "trtype": "TCP", 00:17:14.637 "adrfam": "IPv4", 00:17:14.637 "traddr": "10.0.0.1", 00:17:14.637 "trsvcid": "46128" 00:17:14.637 }, 00:17:14.637 "auth": { 00:17:14.637 "state": "completed", 00:17:14.637 "digest": "sha256", 00:17:14.637 "dhgroup": "ffdhe8192" 00:17:14.637 } 00:17:14.637 } 00:17:14.637 ]' 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.637 11:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.637 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.899 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:14.899 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.840 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.410 00:17:16.410 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.410 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.410 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.672 { 00:17:16.672 "cntlid": 43, 00:17:16.672 "qid": 0, 00:17:16.672 "state": "enabled", 00:17:16.672 "thread": "nvmf_tgt_poll_group_000", 00:17:16.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:16.672 "listen_address": { 00:17:16.672 "trtype": "TCP", 00:17:16.672 "adrfam": "IPv4", 00:17:16.672 "traddr": "10.0.0.2", 00:17:16.672 "trsvcid": "4420" 00:17:16.672 }, 00:17:16.672 "peer_address": { 00:17:16.672 "trtype": "TCP", 00:17:16.672 "adrfam": "IPv4", 00:17:16.672 "traddr": "10.0.0.1", 00:17:16.672 "trsvcid": "46160" 00:17:16.672 }, 00:17:16.672 "auth": { 00:17:16.672 "state": "completed", 00:17:16.672 "digest": "sha256", 00:17:16.672 "dhgroup": "ffdhe8192" 00:17:16.672 } 00:17:16.672 } 00:17:16.672 ]' 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.672 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.932 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:16.932 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.873 11:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.873 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.444 00:17:18.444 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.444 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.444 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.705 { 00:17:18.705 "cntlid": 45, 00:17:18.705 "qid": 0, 00:17:18.705 "state": "enabled", 00:17:18.705 "thread": "nvmf_tgt_poll_group_000", 00:17:18.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:18.705 "listen_address": { 00:17:18.705 "trtype": "TCP", 00:17:18.705 "adrfam": "IPv4", 00:17:18.705 "traddr": "10.0.0.2", 00:17:18.705 "trsvcid": "4420" 00:17:18.705 }, 00:17:18.705 "peer_address": { 00:17:18.705 "trtype": "TCP", 00:17:18.705 "adrfam": "IPv4", 00:17:18.705 "traddr": "10.0.0.1", 00:17:18.705 "trsvcid": "46180" 00:17:18.705 }, 00:17:18.705 "auth": { 00:17:18.705 "state": "completed", 00:17:18.705 "digest": "sha256", 00:17:18.705 "dhgroup": "ffdhe8192" 00:17:18.705 } 00:17:18.705 } 00:17:18.705 ]' 00:17:18.705 
11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.705 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.965 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:18.965 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.907 11:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.907 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.478 00:17:20.478 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.478 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.478 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.478 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.478 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.478 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.478 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.739 { 00:17:20.739 "cntlid": 47, 00:17:20.739 "qid": 0, 00:17:20.739 "state": "enabled", 00:17:20.739 "thread": "nvmf_tgt_poll_group_000", 00:17:20.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:20.739 "listen_address": { 00:17:20.739 "trtype": "TCP", 00:17:20.739 "adrfam": "IPv4", 00:17:20.739 "traddr": "10.0.0.2", 00:17:20.739 "trsvcid": "4420" 00:17:20.739 }, 00:17:20.739 "peer_address": { 00:17:20.739 "trtype": "TCP", 00:17:20.739 "adrfam": "IPv4", 00:17:20.739 "traddr": "10.0.0.1", 00:17:20.739 "trsvcid": "46388" 00:17:20.739 }, 00:17:20.739 "auth": { 00:17:20.739 "state": "completed", 00:17:20.739 
"digest": "sha256", 00:17:20.739 "dhgroup": "ffdhe8192" 00:17:20.739 } 00:17:20.739 } 00:17:20.739 ]' 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.739 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.999 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:20.999 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.571 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:21.832 11:32:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.832 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.093 00:17:22.093 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.093 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.093 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.353 { 00:17:22.353 "cntlid": 49, 00:17:22.353 "qid": 0, 00:17:22.353 "state": "enabled", 00:17:22.353 "thread": "nvmf_tgt_poll_group_000", 00:17:22.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:22.353 "listen_address": { 00:17:22.353 "trtype": "TCP", 00:17:22.353 "adrfam": "IPv4", 
00:17:22.353 "traddr": "10.0.0.2", 00:17:22.353 "trsvcid": "4420" 00:17:22.353 }, 00:17:22.353 "peer_address": { 00:17:22.353 "trtype": "TCP", 00:17:22.353 "adrfam": "IPv4", 00:17:22.353 "traddr": "10.0.0.1", 00:17:22.353 "trsvcid": "46416" 00:17:22.353 }, 00:17:22.353 "auth": { 00:17:22.353 "state": "completed", 00:17:22.353 "digest": "sha384", 00:17:22.353 "dhgroup": "null" 00:17:22.353 } 00:17:22.353 } 00:17:22.353 ]' 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.353 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.354 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.354 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.613 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:22.613 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.554 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.814 00:17:23.814 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.814 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.814 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.075 { 00:17:24.075 "cntlid": 51, 00:17:24.075 "qid": 0, 00:17:24.075 "state": "enabled", 
00:17:24.075 "thread": "nvmf_tgt_poll_group_000", 00:17:24.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:24.075 "listen_address": { 00:17:24.075 "trtype": "TCP", 00:17:24.075 "adrfam": "IPv4", 00:17:24.075 "traddr": "10.0.0.2", 00:17:24.075 "trsvcid": "4420" 00:17:24.075 }, 00:17:24.075 "peer_address": { 00:17:24.075 "trtype": "TCP", 00:17:24.075 "adrfam": "IPv4", 00:17:24.075 "traddr": "10.0.0.1", 00:17:24.075 "trsvcid": "46438" 00:17:24.075 }, 00:17:24.075 "auth": { 00:17:24.075 "state": "completed", 00:17:24.075 "digest": "sha384", 00:17:24.075 "dhgroup": "null" 00:17:24.075 } 00:17:24.075 } 00:17:24.075 ]' 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.075 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.336 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:24.336 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:25.279 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.279 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.279 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.279 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.279 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.279 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.280 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.541 00:17:25.541 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.541 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.541 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.802 11:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.802 { 00:17:25.802 "cntlid": 53, 00:17:25.802 "qid": 0, 00:17:25.802 "state": "enabled", 00:17:25.802 "thread": "nvmf_tgt_poll_group_000", 00:17:25.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:25.802 "listen_address": { 00:17:25.802 "trtype": "TCP", 00:17:25.802 "adrfam": "IPv4", 00:17:25.802 "traddr": "10.0.0.2", 00:17:25.802 "trsvcid": "4420" 00:17:25.802 }, 00:17:25.802 "peer_address": { 00:17:25.802 "trtype": "TCP", 00:17:25.802 "adrfam": "IPv4", 00:17:25.802 "traddr": "10.0.0.1", 00:17:25.802 "trsvcid": "46456" 00:17:25.802 }, 00:17:25.802 "auth": { 00:17:25.802 "state": "completed", 00:17:25.802 "digest": "sha384", 00:17:25.802 "dhgroup": "null" 00:17:25.802 } 00:17:25.802 } 00:17:25.802 ]' 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.802 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.063 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:26.063 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.636 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.897 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.159 00:17:27.159 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.159 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.159 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.420 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.421 { 00:17:27.421 "cntlid": 55, 00:17:27.421 "qid": 0, 00:17:27.421 "state": "enabled", 00:17:27.421 "thread": "nvmf_tgt_poll_group_000", 00:17:27.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:27.421 "listen_address": { 00:17:27.421 "trtype": "TCP", 00:17:27.421 "adrfam": "IPv4", 00:17:27.421 "traddr": "10.0.0.2", 00:17:27.421 "trsvcid": "4420" 00:17:27.421 }, 00:17:27.421 "peer_address": { 00:17:27.421 "trtype": "TCP", 00:17:27.421 "adrfam": "IPv4", 00:17:27.421 "traddr": "10.0.0.1", 00:17:27.421 "trsvcid": "46480" 00:17:27.421 }, 00:17:27.421 "auth": { 00:17:27.421 "state": "completed", 00:17:27.421 "digest": "sha384", 00:17:27.421 "dhgroup": "null" 00:17:27.421 } 00:17:27.421 } 00:17:27.421 ]' 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.421 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.682 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:27.682 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.625 11:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.625 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.626 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.886 00:17:28.886 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.887 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.887 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.887 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.887 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.887 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:28.887 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.887 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.887 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.887 { 00:17:28.887 "cntlid": 57, 00:17:28.887 "qid": 0, 00:17:28.887 "state": "enabled", 00:17:28.887 "thread": "nvmf_tgt_poll_group_000", 00:17:28.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:28.887 "listen_address": { 00:17:28.887 "trtype": "TCP", 00:17:28.887 "adrfam": "IPv4", 00:17:28.887 "traddr": "10.0.0.2", 00:17:28.887 "trsvcid": "4420" 00:17:28.887 }, 00:17:28.887 "peer_address": { 00:17:28.887 "trtype": "TCP", 00:17:28.887 "adrfam": "IPv4", 00:17:28.887 "traddr": "10.0.0.1", 00:17:28.887 "trsvcid": "37680" 00:17:28.887 }, 00:17:28.887 "auth": { 00:17:28.887 "state": "completed", 00:17:28.887 "digest": "sha384", 00:17:28.887 "dhgroup": "ffdhe2048" 00:17:28.887 } 00:17:28.887 } 00:17:28.887 ]' 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.147 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.406 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:29.406 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:29.974 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.974 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.974 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.974 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.234 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.495 00:17:30.495 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.495 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.495 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.756 { 00:17:30.756 "cntlid": 59, 00:17:30.756 "qid": 0, 00:17:30.756 "state": "enabled", 00:17:30.756 "thread": "nvmf_tgt_poll_group_000", 00:17:30.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:30.756 "listen_address": { 00:17:30.756 "trtype": "TCP", 00:17:30.756 "adrfam": "IPv4", 00:17:30.756 "traddr": "10.0.0.2", 00:17:30.756 "trsvcid": "4420" 00:17:30.756 }, 00:17:30.756 "peer_address": { 00:17:30.756 "trtype": "TCP", 00:17:30.756 "adrfam": "IPv4", 00:17:30.756 "traddr": "10.0.0.1", 00:17:30.756 "trsvcid": "37702" 00:17:30.756 }, 00:17:30.756 "auth": { 00:17:30.756 "state": "completed", 00:17:30.756 "digest": "sha384", 00:17:30.756 "dhgroup": "ffdhe2048" 00:17:30.756 } 00:17:30.756 } 00:17:30.756 ]' 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.756 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.015 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:31.015 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.956 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.956 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.216 00:17:32.216 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.216 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.216 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.476 { 00:17:32.476 "cntlid": 61, 00:17:32.476 "qid": 0, 00:17:32.476 "state": "enabled", 00:17:32.476 "thread": "nvmf_tgt_poll_group_000", 00:17:32.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:32.476 "listen_address": { 00:17:32.476 "trtype": "TCP", 00:17:32.476 "adrfam": "IPv4", 00:17:32.476 "traddr": "10.0.0.2", 00:17:32.476 "trsvcid": "4420" 00:17:32.476 }, 00:17:32.476 "peer_address": { 00:17:32.476 "trtype": "TCP", 00:17:32.476 "adrfam": "IPv4", 00:17:32.476 "traddr": "10.0.0.1", 00:17:32.476 "trsvcid": "37724" 00:17:32.476 }, 00:17:32.476 "auth": { 00:17:32.476 "state": "completed", 00:17:32.476 "digest": "sha384", 00:17:32.476 "dhgroup": "ffdhe2048" 00:17:32.476 } 00:17:32.476 } 00:17:32.476 ]' 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.476 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.737 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:32.737 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.678 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.938 00:17:33.938 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.938 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.938 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.199 { 00:17:34.199 "cntlid": 63, 00:17:34.199 "qid": 0, 00:17:34.199 "state": "enabled", 00:17:34.199 "thread": "nvmf_tgt_poll_group_000", 00:17:34.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:34.199 "listen_address": { 00:17:34.199 "trtype": "TCP", 00:17:34.199 "adrfam": "IPv4", 00:17:34.199 "traddr": "10.0.0.2", 00:17:34.199 "trsvcid": "4420" 00:17:34.199 }, 00:17:34.199 "peer_address": { 00:17:34.199 "trtype": "TCP", 00:17:34.199 "adrfam": "IPv4", 00:17:34.199 "traddr": "10.0.0.1", 00:17:34.199 "trsvcid": "37750" 00:17:34.199 }, 00:17:34.199 "auth": { 00:17:34.199 "state": "completed", 00:17:34.199 "digest": "sha384", 00:17:34.199 "dhgroup": "ffdhe2048" 00:17:34.199 } 00:17:34.199 } 00:17:34.199 ]' 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.199 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.459 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:34.459 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:35.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.402 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.663 
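The nvme connect / "NQN:... disconnected 1 controller(s)" entries interleaved through this stretch are the kernel-initiator leg of each iteration: after the SPDK host stack authenticates, the same credentials are replayed through nvme-cli. A sketch of that leg, using the throwaway key0 secret pair that appears verbatim in this log (CI test vectors, not real credentials):

#!/usr/bin/env bash
# Kernel-initiator leg of an iteration: replay the DHHC-1 secrets through
# nvme-cli. The secrets below are the throwaway key0 test vectors copied
# from this log, not real credentials.
nqn=nqn.2024-03.io.spdk:cnode0
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
key='DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==:'
ckey='DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=:'

nvme connect -t tcp -a 10.0.0.2 -n "$nqn" -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
  --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

# A successful handshake leaves one attached controller to tear down,
# producing the "disconnected 1 controller(s)" lines seen in the log.
nvme disconnect -n "$nqn"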
00:17:35.663 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.663 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.663 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.924 { 00:17:35.924 "cntlid": 65, 00:17:35.924 "qid": 0, 00:17:35.924 "state": "enabled", 00:17:35.924 "thread": "nvmf_tgt_poll_group_000", 00:17:35.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:35.924 "listen_address": { 00:17:35.924 "trtype": "TCP", 00:17:35.924 "adrfam": "IPv4", 00:17:35.924 "traddr": "10.0.0.2", 00:17:35.924 "trsvcid": "4420" 00:17:35.924 }, 00:17:35.924 "peer_address": { 00:17:35.924 "trtype": "TCP", 00:17:35.924 "adrfam": "IPv4", 00:17:35.924 "traddr": "10.0.0.1", 00:17:35.924 "trsvcid": "37762" 00:17:35.924 }, 00:17:35.924 "auth": { 00:17:35.924 "state": "completed", 00:17:35.924 "digest": "sha384", 00:17:35.924 "dhgroup": "ffdhe3072" 00:17:35.924 } 00:17:35.924 } 00:17:35.924 ]' 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.924 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.924 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.924 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.924 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.185 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:36.185 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.126 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.126 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.387 00:17:37.387 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.387 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.387 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.648 { 00:17:37.648 "cntlid": 67, 00:17:37.648 "qid": 0, 00:17:37.648 "state": "enabled", 00:17:37.648 "thread": "nvmf_tgt_poll_group_000", 00:17:37.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:37.648 "listen_address": { 00:17:37.648 "trtype": "TCP", 00:17:37.648 "adrfam": "IPv4", 00:17:37.648 "traddr": "10.0.0.2", 00:17:37.648 "trsvcid": "4420" 00:17:37.648 }, 00:17:37.648 "peer_address": { 00:17:37.648 "trtype": "TCP", 00:17:37.648 "adrfam": "IPv4", 00:17:37.648 "traddr": "10.0.0.1", 00:17:37.648 "trsvcid": "37786" 00:17:37.648 }, 00:17:37.648 "auth": { 00:17:37.648 "state": "completed", 00:17:37.648 "digest": "sha384", 00:17:37.648 "dhgroup": "ffdhe3072" 00:17:37.648 } 00:17:37.648 } 00:17:37.648 ]' 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.648 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.909 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret 
DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:37.909 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.605 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.872 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.138 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.138 { 00:17:39.138 "cntlid": 69, 00:17:39.138 "qid": 0, 00:17:39.138 "state": "enabled", 00:17:39.138 "thread": "nvmf_tgt_poll_group_000", 00:17:39.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:39.138 "listen_address": { 00:17:39.138 "trtype": "TCP", 00:17:39.138 "adrfam": "IPv4", 00:17:39.138 "traddr": "10.0.0.2", 00:17:39.138 "trsvcid": "4420" 00:17:39.138 }, 00:17:39.138 "peer_address": { 00:17:39.138 "trtype": "TCP", 00:17:39.138 "adrfam": "IPv4", 00:17:39.138 "traddr": "10.0.0.1", 00:17:39.138 "trsvcid": "44894" 00:17:39.138 }, 00:17:39.138 "auth": { 00:17:39.138 "state": "completed", 00:17:39.138 "digest": "sha384", 00:17:39.138 "dhgroup": "ffdhe3072" 00:17:39.138 } 00:17:39.138 } 00:17:39.138 ]' 00:17:39.138 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:39.404 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:39.405 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.360 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
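[Editor's note] Each round in this transcript follows the same pattern: the host-side bdev_nvme options are restricted to a single digest/dhgroup pair, the host NQN is (re)registered on the target with the DH-HMAC-CHAP key under test, a controller is attached over TCP, the negotiated parameters are verified, and everything is torn down before the next combination. A minimal sketch of one such round follows, using only RPCs that appear in the log above. It assumes the subsystem, the 10.0.0.2:4420 listener, and the key objects (key3 has no paired controller key, matching the log) were created earlier in target/auth.sh, that rpc.py is invoked from the SPDK repo root, and that $hostnqn is an illustrative shell variable, not a value taken from the log:

    # Host side (-s /var/tmp/host.sock): allow only sha384 + ffdhe3072 this round.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Target side (default RPC socket): authorize the host NQN with key3.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key3

    # Host side: attach a controller, authenticating with the same key.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

    # Tear down before the next digest/dhgroup/key combination.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Note the split between the two RPC sockets: set_options, attach_controller, get_controllers, and detach_controller go to the host application at /var/tmp/host.sock (the log's hostrpc helper), while the subsystem add/remove/get calls go to the target's default socket (the log's rpc_cmd).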
00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.624 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.624 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.889 { 00:17:40.889 "cntlid": 71, 00:17:40.889 "qid": 0, 00:17:40.889 "state": "enabled", 00:17:40.889 "thread": "nvmf_tgt_poll_group_000", 00:17:40.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:40.889 "listen_address": { 00:17:40.889 "trtype": "TCP", 00:17:40.889 "adrfam": "IPv4", 00:17:40.889 "traddr": "10.0.0.2", 00:17:40.889 "trsvcid": "4420" 00:17:40.889 }, 00:17:40.889 "peer_address": { 00:17:40.889 "trtype": "TCP", 00:17:40.889 "adrfam": "IPv4", 00:17:40.889 "traddr": "10.0.0.1", 00:17:40.889 "trsvcid": "44922" 00:17:40.889 }, 00:17:40.889 "auth": { 00:17:40.889 "state": "completed", 00:17:40.889 "digest": "sha384", 00:17:40.889 "dhgroup": "ffdhe3072" 00:17:40.889 } 00:17:40.889 } 00:17:40.889 ]' 00:17:40.889 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.889 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.889 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.155 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.155 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.155 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.155 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.155 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.155 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:41.155 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
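[Editor's note] After each successful attach, the script cross-checks what was actually negotiated: bdev_nvme_get_controllers confirms the controller name is nvme0, and nvmf_subsystem_get_qpairs on the target reports the qpair's auth block, which is compared field by field against the expected digest, dhgroup, and state. A sketch of that check with the expected values for the ffdhe4096 round that starts here (the $qpairs variable name is illustrative; the jq filters are the ones used in the log):

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished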
00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.140 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.405 00:17:42.405 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.405 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.405 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.672 { 00:17:42.672 "cntlid": 73, 00:17:42.672 "qid": 0, 00:17:42.672 "state": "enabled", 00:17:42.672 "thread": "nvmf_tgt_poll_group_000", 00:17:42.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:42.672 "listen_address": { 00:17:42.672 "trtype": "TCP", 00:17:42.672 "adrfam": "IPv4", 00:17:42.672 "traddr": "10.0.0.2", 00:17:42.672 "trsvcid": "4420" 00:17:42.672 }, 00:17:42.672 "peer_address": { 00:17:42.672 "trtype": "TCP", 00:17:42.672 "adrfam": "IPv4", 00:17:42.672 "traddr": "10.0.0.1", 00:17:42.672 "trsvcid": "44950" 00:17:42.672 }, 00:17:42.672 "auth": { 00:17:42.672 "state": "completed", 00:17:42.672 "digest": "sha384", 00:17:42.672 "dhgroup": "ffdhe4096" 00:17:42.672 } 00:17:42.672 } 00:17:42.672 ]' 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.672 
11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.672 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.938 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:42.938 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.899 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.170 00:17:44.170 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.170 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.170 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.440 { 00:17:44.440 "cntlid": 75, 00:17:44.440 "qid": 0, 00:17:44.440 "state": "enabled", 00:17:44.440 "thread": "nvmf_tgt_poll_group_000", 00:17:44.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:44.440 "listen_address": { 00:17:44.440 "trtype": "TCP", 00:17:44.440 "adrfam": "IPv4", 00:17:44.440 "traddr": "10.0.0.2", 00:17:44.440 "trsvcid": "4420" 00:17:44.440 }, 00:17:44.440 "peer_address": { 00:17:44.440 "trtype": "TCP", 00:17:44.440 "adrfam": "IPv4", 00:17:44.440 "traddr": "10.0.0.1", 00:17:44.440 "trsvcid": "44976" 00:17:44.440 }, 00:17:44.440 "auth": { 00:17:44.440 "state": "completed", 00:17:44.440 "digest": "sha384", 00:17:44.440 "dhgroup": "ffdhe4096" 00:17:44.440 } 00:17:44.440 } 00:17:44.440 ]' 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.440 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:44.441 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.441 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.441 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.441 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.709 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:44.709 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.699 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.971 00:17:45.971 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.971 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.971 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.240 { 00:17:46.240 "cntlid": 77, 00:17:46.240 "qid": 0, 00:17:46.240 "state": "enabled", 00:17:46.240 "thread": "nvmf_tgt_poll_group_000", 00:17:46.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:46.240 "listen_address": { 00:17:46.240 "trtype": "TCP", 00:17:46.240 "adrfam": "IPv4", 00:17:46.240 "traddr": "10.0.0.2", 00:17:46.240 "trsvcid": "4420" 00:17:46.240 }, 00:17:46.240 "peer_address": { 00:17:46.240 "trtype": "TCP", 00:17:46.240 "adrfam": "IPv4", 00:17:46.240 "traddr": "10.0.0.1", 00:17:46.240 "trsvcid": "45002" 00:17:46.240 }, 00:17:46.240 "auth": { 00:17:46.240 "state": "completed", 00:17:46.240 "digest": "sha384", 00:17:46.240 "dhgroup": "ffdhe4096" 00:17:46.240 } 00:17:46.240 } 00:17:46.240 ]' 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.240 11:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.240 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.512 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:46.512 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:47.109 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.381 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.659 00:17:47.659 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.659 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.659 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.931 { 00:17:47.931 "cntlid": 79, 00:17:47.931 "qid": 0, 00:17:47.931 "state": "enabled", 00:17:47.931 "thread": "nvmf_tgt_poll_group_000", 00:17:47.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:47.931 "listen_address": { 00:17:47.931 "trtype": "TCP", 00:17:47.931 "adrfam": "IPv4", 00:17:47.931 "traddr": "10.0.0.2", 00:17:47.931 "trsvcid": "4420" 00:17:47.931 }, 00:17:47.931 "peer_address": { 00:17:47.931 "trtype": "TCP", 00:17:47.931 "adrfam": "IPv4", 00:17:47.931 "traddr": "10.0.0.1", 00:17:47.931 "trsvcid": "45030" 00:17:47.931 }, 00:17:47.931 "auth": { 00:17:47.931 "state": "completed", 00:17:47.931 "digest": "sha384", 00:17:47.931 "dhgroup": "ffdhe4096" 00:17:47.931 } 00:17:47.931 } 00:17:47.931 ]' 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.931 11:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.931 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.931 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.931 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.931 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.932 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.932 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.201 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:48.201 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.797 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.073 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.073 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.350 00:17:49.350 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.350 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.350 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.634 { 00:17:49.634 "cntlid": 81, 00:17:49.634 "qid": 0, 00:17:49.634 "state": "enabled", 00:17:49.634 "thread": "nvmf_tgt_poll_group_000", 00:17:49.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:49.634 "listen_address": { 00:17:49.634 "trtype": "TCP", 00:17:49.634 "adrfam": "IPv4", 00:17:49.634 "traddr": "10.0.0.2", 00:17:49.634 "trsvcid": "4420" 00:17:49.634 }, 00:17:49.634 "peer_address": { 00:17:49.634 "trtype": "TCP", 00:17:49.634 "adrfam": "IPv4", 00:17:49.634 "traddr": "10.0.0.1", 00:17:49.634 "trsvcid": "49380" 00:17:49.634 }, 00:17:49.634 "auth": { 00:17:49.634 "state": "completed", 00:17:49.634 "digest": 
"sha384", 00:17:49.634 "dhgroup": "ffdhe6144" 00:17:49.634 } 00:17:49.634 } 00:17:49.634 ]' 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.634 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.918 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:49.919 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.521 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.795 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.069 00:17:51.069 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.069 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.069 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.339 { 00:17:51.339 "cntlid": 83, 00:17:51.339 "qid": 0, 00:17:51.339 "state": "enabled", 00:17:51.339 "thread": "nvmf_tgt_poll_group_000", 00:17:51.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:51.339 "listen_address": { 00:17:51.339 "trtype": "TCP", 00:17:51.339 "adrfam": "IPv4", 00:17:51.339 "traddr": "10.0.0.2", 00:17:51.339 
"trsvcid": "4420" 00:17:51.339 }, 00:17:51.339 "peer_address": { 00:17:51.339 "trtype": "TCP", 00:17:51.339 "adrfam": "IPv4", 00:17:51.339 "traddr": "10.0.0.1", 00:17:51.339 "trsvcid": "49414" 00:17:51.339 }, 00:17:51.339 "auth": { 00:17:51.339 "state": "completed", 00:17:51.339 "digest": "sha384", 00:17:51.339 "dhgroup": "ffdhe6144" 00:17:51.339 } 00:17:51.339 } 00:17:51.339 ]' 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.339 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.603 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.603 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.603 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.603 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:51.603 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:52.578 
11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.578 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.860 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.178 { 00:17:53.178 "cntlid": 85, 00:17:53.178 "qid": 0, 00:17:53.178 "state": "enabled", 00:17:53.178 "thread": "nvmf_tgt_poll_group_000", 00:17:53.178 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:53.178 "listen_address": { 00:17:53.178 "trtype": "TCP", 00:17:53.178 "adrfam": "IPv4", 00:17:53.178 "traddr": "10.0.0.2", 00:17:53.178 "trsvcid": "4420" 00:17:53.178 }, 00:17:53.178 "peer_address": { 00:17:53.178 "trtype": "TCP", 00:17:53.178 "adrfam": "IPv4", 00:17:53.178 "traddr": "10.0.0.1", 00:17:53.178 "trsvcid": "49442" 00:17:53.178 }, 00:17:53.178 "auth": { 00:17:53.178 "state": "completed", 00:17:53.178 "digest": "sha384", 00:17:53.178 "dhgroup": "ffdhe6144" 00:17:53.178 } 00:17:53.178 } 00:17:53.178 ]' 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.178 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.447 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.447 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.447 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.447 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:53.447 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:17:54.445 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.445 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:54.445 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.445 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.445 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.445 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:54.446 11:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.446 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.727 00:17:54.727 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.727 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.727 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.005 { 00:17:55.005 "cntlid": 87, 
00:17:55.005 "qid": 0, 00:17:55.005 "state": "enabled", 00:17:55.005 "thread": "nvmf_tgt_poll_group_000", 00:17:55.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:55.005 "listen_address": { 00:17:55.005 "trtype": "TCP", 00:17:55.005 "adrfam": "IPv4", 00:17:55.005 "traddr": "10.0.0.2", 00:17:55.005 "trsvcid": "4420" 00:17:55.005 }, 00:17:55.005 "peer_address": { 00:17:55.005 "trtype": "TCP", 00:17:55.005 "adrfam": "IPv4", 00:17:55.005 "traddr": "10.0.0.1", 00:17:55.005 "trsvcid": "49476" 00:17:55.005 }, 00:17:55.005 "auth": { 00:17:55.005 "state": "completed", 00:17:55.005 "digest": "sha384", 00:17:55.005 "dhgroup": "ffdhe6144" 00:17:55.005 } 00:17:55.005 } 00:17:55.005 ]' 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.005 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.281 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.281 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.281 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.281 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:55.281 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.280 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.281 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.912 00:17:56.912 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.912 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.912 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.912 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.912 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.912 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.912 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.192 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.192 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.192 { 00:17:57.192 "cntlid": 89, 00:17:57.192 "qid": 0, 00:17:57.192 "state": "enabled", 00:17:57.192 "thread": "nvmf_tgt_poll_group_000", 00:17:57.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.192 "listen_address": { 00:17:57.192 "trtype": "TCP", 00:17:57.192 "adrfam": "IPv4", 00:17:57.192 "traddr": "10.0.0.2", 00:17:57.192 "trsvcid": "4420" 00:17:57.192 }, 00:17:57.192 "peer_address": { 00:17:57.192 "trtype": "TCP", 00:17:57.192 "adrfam": "IPv4", 00:17:57.192 "traddr": "10.0.0.1", 00:17:57.192 "trsvcid": "49498" 00:17:57.192 }, 00:17:57.192 "auth": { 00:17:57.192 "state": "completed", 00:17:57.192 "digest": "sha384", 00:17:57.192 "dhgroup": "ffdhe8192" 00:17:57.192 } 00:17:57.192 } 00:17:57.192 ]' 00:17:57.192 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.192 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.193 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.193 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.193 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.193 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.193 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.193 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.469 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:57.469 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:17:58.075 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.075 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.075 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.075 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.075 11:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.075 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.075 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.075 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.375 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.984 00:17:58.984 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.984 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.984 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.984 { 00:17:58.984 "cntlid": 91, 00:17:58.984 "qid": 0, 00:17:58.984 "state": "enabled", 00:17:58.984 "thread": "nvmf_tgt_poll_group_000", 00:17:58.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:58.984 "listen_address": { 00:17:58.984 "trtype": "TCP", 00:17:58.984 "adrfam": "IPv4", 00:17:58.984 "traddr": "10.0.0.2", 00:17:58.984 "trsvcid": "4420" 00:17:58.984 }, 00:17:58.984 "peer_address": { 00:17:58.984 "trtype": "TCP", 00:17:58.984 "adrfam": "IPv4", 00:17:58.984 "traddr": "10.0.0.1", 00:17:58.984 "trsvcid": "49520" 00:17:58.984 }, 00:17:58.984 "auth": { 00:17:58.984 "state": "completed", 00:17:58.984 "digest": "sha384", 00:17:58.984 "dhgroup": "ffdhe8192" 00:17:58.984 } 00:17:58.984 } 00:17:58.984 ]' 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.984 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.282 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.282 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.282 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.282 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.282 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.282 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:17:59.282 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.319 11:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.319 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.959 00:18:00.959 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.959 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.959 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.959 11:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.959 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.960 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.960 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.960 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.960 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.960 { 00:18:00.960 "cntlid": 93, 00:18:00.960 "qid": 0, 00:18:00.960 "state": "enabled", 00:18:00.960 "thread": "nvmf_tgt_poll_group_000", 00:18:00.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:00.960 "listen_address": { 00:18:00.960 "trtype": "TCP", 00:18:00.960 "adrfam": "IPv4", 00:18:00.960 "traddr": "10.0.0.2", 00:18:00.960 "trsvcid": "4420" 00:18:00.960 }, 00:18:00.960 "peer_address": { 00:18:00.960 "trtype": "TCP", 00:18:00.960 "adrfam": "IPv4", 00:18:00.960 "traddr": "10.0.0.1", 00:18:00.960 "trsvcid": "42424" 00:18:00.960 }, 00:18:00.960 "auth": { 00:18:00.960 "state": "completed", 00:18:00.960 "digest": "sha384", 00:18:00.960 "dhgroup": "ffdhe8192" 00:18:00.960 } 00:18:00.960 } 00:18:00.960 ]' 00:18:00.960 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:01.260 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.203 11:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.203 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.464 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.464 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.464 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.464 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.725 00:18:02.986 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.986 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.986 
11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.986 { 00:18:02.986 "cntlid": 95, 00:18:02.986 "qid": 0, 00:18:02.986 "state": "enabled", 00:18:02.986 "thread": "nvmf_tgt_poll_group_000", 00:18:02.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:02.986 "listen_address": { 00:18:02.986 "trtype": "TCP", 00:18:02.986 "adrfam": "IPv4", 00:18:02.986 "traddr": "10.0.0.2", 00:18:02.986 "trsvcid": "4420" 00:18:02.986 }, 00:18:02.986 "peer_address": { 00:18:02.986 "trtype": "TCP", 00:18:02.986 "adrfam": "IPv4", 00:18:02.986 "traddr": "10.0.0.1", 00:18:02.986 "trsvcid": "42450" 00:18:02.986 }, 00:18:02.986 "auth": { 00:18:02.986 "state": "completed", 00:18:02.986 "digest": "sha384", 00:18:02.986 "dhgroup": "ffdhe8192" 00:18:02.986 } 00:18:02.986 } 00:18:02.986 ]' 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.986 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.247 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.247 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.247 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.247 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.247 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.247 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:03.247 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:04.187 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.187 11:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.187 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.187 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.188 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.449 00:18:04.449 
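The target/auth.sh@118-@123 frames visible above are the test's driver loop: every digest, every DH group, and every key index gets its own full connect/authenticate/disconnect round. A minimal reconstruction of that loop from the xtrace (not the verbatim script; `keys`, `hostrpc`, and `connect_authenticate` are the suite's own arrays and wrappers, and the value lists below are only what appears in this excerpt):

    digests=(sha384 sha512)                        # values seen in this excerpt
    dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)  # values seen in this excerpt
    for digest in "${digests[@]}"; do              # target/auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@119
            for keyid in "${!keys[@]}"; do         # target/auth.sh@120: key0..key3
                # Pin the host-side NVMe driver to the one digest/DH-group
                # combination under test, then run a full round trip.
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

Each connect_authenticate call is what produces the add_host / attach_controller / get_qpairs sequences that repeat through the rest of this log.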
11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.449 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.449 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.709 { 00:18:04.709 "cntlid": 97, 00:18:04.709 "qid": 0, 00:18:04.709 "state": "enabled", 00:18:04.709 "thread": "nvmf_tgt_poll_group_000", 00:18:04.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:04.709 "listen_address": { 00:18:04.709 "trtype": "TCP", 00:18:04.709 "adrfam": "IPv4", 00:18:04.709 "traddr": "10.0.0.2", 00:18:04.709 "trsvcid": "4420" 00:18:04.709 }, 00:18:04.709 "peer_address": { 00:18:04.709 "trtype": "TCP", 00:18:04.709 "adrfam": "IPv4", 00:18:04.709 "traddr": "10.0.0.1", 00:18:04.709 "trsvcid": "42484" 00:18:04.709 }, 00:18:04.709 "auth": { 00:18:04.709 "state": "completed", 00:18:04.709 "digest": "sha512", 00:18:04.709 "dhgroup": "null" 00:18:04.709 } 00:18:04.709 } 00:18:04.709 ]' 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:04.709 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.970 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.970 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.970 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.970 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:04.970 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.911 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.911 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.912 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.171 00:18:06.171 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.171 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.171 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.431 { 00:18:06.431 "cntlid": 99, 00:18:06.431 "qid": 0, 00:18:06.431 "state": "enabled", 00:18:06.431 "thread": "nvmf_tgt_poll_group_000", 00:18:06.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:06.431 "listen_address": { 00:18:06.431 "trtype": "TCP", 00:18:06.431 "adrfam": "IPv4", 00:18:06.431 "traddr": "10.0.0.2", 00:18:06.431 "trsvcid": "4420" 00:18:06.431 }, 00:18:06.431 "peer_address": { 00:18:06.431 "trtype": "TCP", 00:18:06.431 "adrfam": "IPv4", 00:18:06.431 "traddr": "10.0.0.1", 00:18:06.431 "trsvcid": "42516" 00:18:06.431 }, 00:18:06.431 "auth": { 00:18:06.431 "state": "completed", 00:18:06.431 "digest": "sha512", 00:18:06.431 "dhgroup": "null" 00:18:06.431 } 00:18:06.431 } 00:18:06.431 ]' 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:06.431 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.690 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.690 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.690 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.690 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:06.690 11:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
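The `nvme connect` call traced a few entries above is the kernel-initiator half of each round: the same generated DHHC-1 secrets that were registered with the target over RPC are handed to nvme-cli on the command line. A condensed sketch of that step (the hostid and secret strings stand in for this run's generated test keys):

    # Kernel-initiator leg of one round, matching the nvme-cli calls in this
    # log. hostid and the DHHC-1 strings are placeholders for generated values.
    hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:...' \
        --dhchap-ctrl-secret 'DHHC-1:01:...'   # dropped in the key3 rounds,
                                               # which exercise one-way auth
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Passing --dhchap-ctrl-secret makes the handshake bidirectional; the key3 rounds register no controller key on the subsystem (nvmf_subsystem_add_host carries only --dhchap-key key3 above), so there only the host is authenticated.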
00:18:07.629 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.890 00:18:07.890 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.890 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.890 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.151 { 00:18:08.151 "cntlid": 101, 00:18:08.151 "qid": 0, 00:18:08.151 "state": "enabled", 00:18:08.151 "thread": "nvmf_tgt_poll_group_000", 00:18:08.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:08.151 "listen_address": { 00:18:08.151 "trtype": "TCP", 00:18:08.151 "adrfam": "IPv4", 00:18:08.151 "traddr": "10.0.0.2", 00:18:08.151 "trsvcid": "4420" 00:18:08.151 }, 00:18:08.151 "peer_address": { 00:18:08.151 "trtype": "TCP", 00:18:08.151 "adrfam": "IPv4", 00:18:08.151 "traddr": "10.0.0.1", 00:18:08.151 "trsvcid": "42536" 00:18:08.151 }, 00:18:08.151 "auth": { 00:18:08.151 "state": "completed", 00:18:08.151 "digest": "sha512", 00:18:08.151 "dhgroup": "null" 00:18:08.151 } 00:18:08.151 } 00:18:08.151 ]' 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.151 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.412 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:08.412 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.354 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.615 00:18:09.615 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.615 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.615 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.876 { 00:18:09.876 "cntlid": 103, 00:18:09.876 "qid": 0, 00:18:09.876 "state": "enabled", 00:18:09.876 "thread": "nvmf_tgt_poll_group_000", 00:18:09.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:09.876 "listen_address": { 00:18:09.876 "trtype": "TCP", 00:18:09.876 "adrfam": "IPv4", 00:18:09.876 "traddr": "10.0.0.2", 00:18:09.876 "trsvcid": "4420" 00:18:09.876 }, 00:18:09.876 "peer_address": { 00:18:09.876 "trtype": "TCP", 00:18:09.876 "adrfam": "IPv4", 00:18:09.876 "traddr": "10.0.0.1", 00:18:09.876 "trsvcid": "56216" 00:18:09.876 }, 00:18:09.876 "auth": { 00:18:09.876 "state": "completed", 00:18:09.876 "digest": "sha512", 00:18:09.876 "dhgroup": "null" 00:18:09.876 } 00:18:09.876 } 00:18:09.876 ]' 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.876 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.136 11:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:10.136 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.077 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.077 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:11.077 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.077 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.077 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:11.077 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.077 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.078 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.078 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.078 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.078 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.078 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
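The surrounding records trace one full DH-HMAC-CHAP setup: the host-side NVMe bdev driver is pinned to a single digest/DH-group pair, the target subsystem is told which named keys to expect from this host, and a controller is attached through the host RPC socket so authentication actually runs. A minimal sketch of that sequence, assuming the same socket path, addresses, and NQNs as this run ($HOSTNQN stands in for the uuid host NQN; key0/ckey0 are key names registered earlier in the test, not literal secrets):

    # host side: allow exactly one digest and one DH group for DH-HMAC-CHAP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # target side: register the host with a key and a (bidirectional) controller key
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach a controller over TCP, authenticating with the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0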
00:18:11.078 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.078 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.342 00:18:11.342 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.342 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.342 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.604 { 00:18:11.604 "cntlid": 105, 00:18:11.604 "qid": 0, 00:18:11.604 "state": "enabled", 00:18:11.604 "thread": "nvmf_tgt_poll_group_000", 00:18:11.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:11.604 "listen_address": { 00:18:11.604 "trtype": "TCP", 00:18:11.604 "adrfam": "IPv4", 00:18:11.604 "traddr": "10.0.0.2", 00:18:11.604 "trsvcid": "4420" 00:18:11.604 }, 00:18:11.604 "peer_address": { 00:18:11.604 "trtype": "TCP", 00:18:11.604 "adrfam": "IPv4", 00:18:11.604 "traddr": "10.0.0.1", 00:18:11.604 "trsvcid": "56262" 00:18:11.604 }, 00:18:11.604 "auth": { 00:18:11.604 "state": "completed", 00:18:11.604 "digest": "sha512", 00:18:11.604 "dhgroup": "ffdhe2048" 00:18:11.604 } 00:18:11.604 } 00:18:11.604 ]' 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.604 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.604 11:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.864 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:11.864 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.806 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.066 00:18:13.066 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.066 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.066 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.326 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.326 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.326 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.326 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.326 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.326 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.326 { 00:18:13.326 "cntlid": 107, 00:18:13.326 "qid": 0, 00:18:13.326 "state": "enabled", 00:18:13.326 "thread": "nvmf_tgt_poll_group_000", 00:18:13.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:13.326 "listen_address": { 00:18:13.326 "trtype": "TCP", 00:18:13.326 "adrfam": "IPv4", 00:18:13.326 "traddr": "10.0.0.2", 00:18:13.326 "trsvcid": "4420" 00:18:13.326 }, 00:18:13.326 "peer_address": { 00:18:13.326 "trtype": "TCP", 00:18:13.326 "adrfam": "IPv4", 00:18:13.327 "traddr": "10.0.0.1", 00:18:13.327 "trsvcid": "56296" 00:18:13.327 }, 00:18:13.327 "auth": { 00:18:13.327 "state": "completed", 00:18:13.327 "digest": "sha512", 00:18:13.327 "dhgroup": "ffdhe2048" 00:18:13.327 } 00:18:13.327 } 00:18:13.327 ]' 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.327 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.587 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:13.587 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:14.157 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
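Each attach is then verified by reading state back from both sides: the host RPC must report the controller under the expected name, and the target's qpair listing must show authentication completed with the digest and DH group that were configured. Condensed from the get_controllers/get_qpairs/jq records that repeat through this section:

    # the attached controller should be reported as nvme0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # the target's view of the qpair: negotiated digest, dhgroup, and auth state
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # detach before the next key/DH-group combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0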
00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.418 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.678 00:18:14.678 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.678 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.679 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.939 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.939 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.939 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.939 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.939 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.939 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.939 { 00:18:14.939 "cntlid": 109, 00:18:14.939 "qid": 0, 00:18:14.939 "state": "enabled", 00:18:14.939 "thread": "nvmf_tgt_poll_group_000", 00:18:14.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:14.939 "listen_address": { 00:18:14.939 "trtype": "TCP", 00:18:14.939 "adrfam": "IPv4", 00:18:14.939 "traddr": "10.0.0.2", 00:18:14.939 "trsvcid": "4420" 00:18:14.939 }, 00:18:14.939 "peer_address": { 00:18:14.939 "trtype": "TCP", 00:18:14.939 "adrfam": "IPv4", 00:18:14.939 "traddr": "10.0.0.1", 00:18:14.939 "trsvcid": "56324" 00:18:14.939 }, 00:18:14.939 "auth": { 00:18:14.939 "state": "completed", 00:18:14.939 "digest": "sha512", 00:18:14.939 "dhgroup": "ffdhe2048" 00:18:14.939 } 00:18:14.939 } 00:18:14.939 ]' 00:18:14.939 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.939 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.939 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.939 11:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.939 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.200 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.200 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.200 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.200 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:15.200 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.141 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.402 11:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.402 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.663 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.663 { 00:18:16.663 "cntlid": 111, 00:18:16.663 "qid": 0, 00:18:16.663 "state": "enabled", 00:18:16.663 "thread": "nvmf_tgt_poll_group_000", 00:18:16.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:16.663 "listen_address": { 00:18:16.663 "trtype": "TCP", 00:18:16.663 "adrfam": "IPv4", 00:18:16.663 "traddr": "10.0.0.2", 00:18:16.663 "trsvcid": "4420" 00:18:16.663 }, 00:18:16.663 "peer_address": { 00:18:16.663 "trtype": "TCP", 00:18:16.663 "adrfam": "IPv4", 00:18:16.663 "traddr": "10.0.0.1", 00:18:16.663 "trsvcid": "56356" 00:18:16.663 }, 00:18:16.663 "auth": { 00:18:16.663 "state": "completed", 00:18:16.663 "digest": "sha512", 00:18:16.663 "dhgroup": "ffdhe2048" 00:18:16.663 } 00:18:16.663 } 00:18:16.663 ]' 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.663 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.924 
11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.924 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.924 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.924 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.924 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.924 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.924 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:16.924 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:17.866 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:17.866 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:17.866 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.127 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.127 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.388 { 00:18:18.388 "cntlid": 113, 00:18:18.388 "qid": 0, 00:18:18.388 "state": "enabled", 00:18:18.388 "thread": "nvmf_tgt_poll_group_000", 00:18:18.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:18.388 "listen_address": { 00:18:18.388 "trtype": "TCP", 00:18:18.388 "adrfam": "IPv4", 00:18:18.388 "traddr": "10.0.0.2", 00:18:18.388 "trsvcid": "4420" 00:18:18.388 }, 00:18:18.388 "peer_address": { 00:18:18.388 "trtype": "TCP", 00:18:18.388 "adrfam": "IPv4", 00:18:18.388 "traddr": "10.0.0.1", 00:18:18.388 "trsvcid": "56390" 00:18:18.388 }, 00:18:18.388 "auth": { 00:18:18.388 "state": "completed", 00:18:18.388 "digest": "sha512", 00:18:18.388 "dhgroup": "ffdhe3072" 00:18:18.388 } 00:18:18.388 } 00:18:18.388 ]' 00:18:18.388 11:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.388 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.649 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.649 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.649 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.649 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.649 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.649 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:18.649 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.591 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.592 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.853 00:18:19.853 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.113 { 00:18:20.113 "cntlid": 115, 00:18:20.113 "qid": 0, 00:18:20.113 "state": "enabled", 00:18:20.113 "thread": "nvmf_tgt_poll_group_000", 00:18:20.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:20.113 "listen_address": { 00:18:20.113 "trtype": "TCP", 00:18:20.113 "adrfam": "IPv4", 00:18:20.113 "traddr": "10.0.0.2", 00:18:20.113 "trsvcid": "4420" 00:18:20.113 }, 00:18:20.113 "peer_address": { 00:18:20.113 "trtype": "TCP", 00:18:20.113 "adrfam": "IPv4", 
00:18:20.113 "traddr": "10.0.0.1", 00:18:20.113 "trsvcid": "56666" 00:18:20.113 }, 00:18:20.113 "auth": { 00:18:20.113 "state": "completed", 00:18:20.113 "digest": "sha512", 00:18:20.113 "dhgroup": "ffdhe3072" 00:18:20.113 } 00:18:20.113 } 00:18:20.113 ]' 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.113 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.374 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.374 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.374 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.374 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.374 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.374 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:20.374 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
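The @119-@123 markers in these traces give away the sweep structure in target/auth.sh: an outer loop over DH groups and an inner loop over key indices, re-applying the host options and re-running the connect/verify cycle for every combination. Roughly, using the helper names the traces show (the exact group list is an inference from the groups exercised in this log):

    for dhgroup in "${dhgroups[@]}"; do      # null ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do       # indices 0..3 -> key0..key3 / ckey0..ckey2
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done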
00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.316 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.576 00:18:21.576 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.576 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.576 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.836 { 00:18:21.836 "cntlid": 117, 00:18:21.836 "qid": 0, 00:18:21.836 "state": "enabled", 00:18:21.836 "thread": "nvmf_tgt_poll_group_000", 00:18:21.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:21.836 "listen_address": { 00:18:21.836 "trtype": "TCP", 
00:18:21.836 "adrfam": "IPv4", 00:18:21.836 "traddr": "10.0.0.2", 00:18:21.836 "trsvcid": "4420" 00:18:21.836 }, 00:18:21.836 "peer_address": { 00:18:21.836 "trtype": "TCP", 00:18:21.836 "adrfam": "IPv4", 00:18:21.836 "traddr": "10.0.0.1", 00:18:21.836 "trsvcid": "56692" 00:18:21.836 }, 00:18:21.836 "auth": { 00:18:21.836 "state": "completed", 00:18:21.836 "digest": "sha512", 00:18:21.836 "dhgroup": "ffdhe3072" 00:18:21.836 } 00:18:21.836 } 00:18:21.836 ]' 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.836 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.097 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.097 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.097 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.097 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.097 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.097 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:22.097 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:23.040 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.040 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.301 00:18:23.301 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.301 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.301 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.562 { 00:18:23.562 "cntlid": 119, 00:18:23.562 "qid": 0, 00:18:23.562 "state": "enabled", 00:18:23.562 "thread": "nvmf_tgt_poll_group_000", 00:18:23.562 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:23.562 "listen_address": { 00:18:23.562 "trtype": "TCP", 00:18:23.562 "adrfam": "IPv4", 00:18:23.562 "traddr": "10.0.0.2", 00:18:23.562 "trsvcid": "4420" 00:18:23.562 }, 00:18:23.562 "peer_address": { 00:18:23.562 "trtype": "TCP", 00:18:23.562 "adrfam": "IPv4", 00:18:23.562 "traddr": "10.0.0.1", 00:18:23.562 "trsvcid": "56718" 00:18:23.562 }, 00:18:23.562 "auth": { 00:18:23.562 "state": "completed", 00:18:23.562 "digest": "sha512", 00:18:23.562 "dhgroup": "ffdhe3072" 00:18:23.562 } 00:18:23.562 } 00:18:23.562 ]' 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.562 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.823 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.823 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.823 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.823 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:23.823 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.765 11:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.765 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.026 00:18:25.026 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.026 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.026 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.287 11:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.287 { 00:18:25.287 "cntlid": 121, 00:18:25.287 "qid": 0, 00:18:25.287 "state": "enabled", 00:18:25.287 "thread": "nvmf_tgt_poll_group_000", 00:18:25.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:25.287 "listen_address": { 00:18:25.287 "trtype": "TCP", 00:18:25.287 "adrfam": "IPv4", 00:18:25.287 "traddr": "10.0.0.2", 00:18:25.287 "trsvcid": "4420" 00:18:25.287 }, 00:18:25.287 "peer_address": { 00:18:25.287 "trtype": "TCP", 00:18:25.287 "adrfam": "IPv4", 00:18:25.287 "traddr": "10.0.0.1", 00:18:25.287 "trsvcid": "56744" 00:18:25.287 }, 00:18:25.287 "auth": { 00:18:25.287 "state": "completed", 00:18:25.287 "digest": "sha512", 00:18:25.287 "dhgroup": "ffdhe4096" 00:18:25.287 } 00:18:25.287 } 00:18:25.287 ]' 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.287 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.549 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.549 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.549 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.549 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:25.549 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.491 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.753 00:18:26.753 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.753 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.753 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.014 { 00:18:27.014 "cntlid": 123, 00:18:27.014 "qid": 0, 00:18:27.014 "state": "enabled", 00:18:27.014 "thread": "nvmf_tgt_poll_group_000", 00:18:27.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:27.014 "listen_address": { 00:18:27.014 "trtype": "TCP", 00:18:27.014 "adrfam": "IPv4", 00:18:27.014 "traddr": "10.0.0.2", 00:18:27.014 "trsvcid": "4420" 00:18:27.014 }, 00:18:27.014 "peer_address": { 00:18:27.014 "trtype": "TCP", 00:18:27.014 "adrfam": "IPv4", 00:18:27.014 "traddr": "10.0.0.1", 00:18:27.014 "trsvcid": "56778" 00:18:27.014 }, 00:18:27.014 "auth": { 00:18:27.014 "state": "completed", 00:18:27.014 "digest": "sha512", 00:18:27.014 "dhgroup": "ffdhe4096" 00:18:27.014 } 00:18:27.014 } 00:18:27.014 ]' 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.014 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.275 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.275 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.275 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.275 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.275 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.275 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:27.275 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.218 11:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.218 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.479 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.740 11:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.740 { 00:18:28.740 "cntlid": 125, 00:18:28.740 "qid": 0, 00:18:28.740 "state": "enabled", 00:18:28.740 "thread": "nvmf_tgt_poll_group_000", 00:18:28.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:28.740 "listen_address": { 00:18:28.740 "trtype": "TCP", 00:18:28.740 "adrfam": "IPv4", 00:18:28.740 "traddr": "10.0.0.2", 00:18:28.740 "trsvcid": "4420" 00:18:28.740 }, 00:18:28.740 "peer_address": { 00:18:28.740 "trtype": "TCP", 00:18:28.740 "adrfam": "IPv4", 00:18:28.740 "traddr": "10.0.0.1", 00:18:28.740 "trsvcid": "56802" 00:18:28.740 }, 00:18:28.740 "auth": { 00:18:28.740 "state": "completed", 00:18:28.740 "digest": "sha512", 00:18:28.740 "dhgroup": "ffdhe4096" 00:18:28.740 } 00:18:28.740 } 00:18:28.740 ]' 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.740 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.001 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.001 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.001 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.001 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.001 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.001 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:29.001 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:29.946 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.946 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.947 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.947 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.947 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.947 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.947 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.947 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.947 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.208 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.208 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.208 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.208 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.468 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.468 11:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.468 { 00:18:30.468 "cntlid": 127, 00:18:30.468 "qid": 0, 00:18:30.468 "state": "enabled", 00:18:30.468 "thread": "nvmf_tgt_poll_group_000", 00:18:30.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:30.468 "listen_address": { 00:18:30.468 "trtype": "TCP", 00:18:30.468 "adrfam": "IPv4", 00:18:30.468 "traddr": "10.0.0.2", 00:18:30.468 "trsvcid": "4420" 00:18:30.468 }, 00:18:30.468 "peer_address": { 00:18:30.468 "trtype": "TCP", 00:18:30.468 "adrfam": "IPv4", 00:18:30.468 "traddr": "10.0.0.1", 00:18:30.468 "trsvcid": "57928" 00:18:30.468 }, 00:18:30.468 "auth": { 00:18:30.468 "state": "completed", 00:18:30.468 "digest": "sha512", 00:18:30.468 "dhgroup": "ffdhe4096" 00:18:30.468 } 00:18:30.468 } 00:18:30.468 ]' 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.468 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.729 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.730 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.730 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.730 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.730 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.990 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:30.990 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.562 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.823 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.083 00:18:32.083 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.084 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.084 
11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.344 { 00:18:32.344 "cntlid": 129, 00:18:32.344 "qid": 0, 00:18:32.344 "state": "enabled", 00:18:32.344 "thread": "nvmf_tgt_poll_group_000", 00:18:32.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:32.344 "listen_address": { 00:18:32.344 "trtype": "TCP", 00:18:32.344 "adrfam": "IPv4", 00:18:32.344 "traddr": "10.0.0.2", 00:18:32.344 "trsvcid": "4420" 00:18:32.344 }, 00:18:32.344 "peer_address": { 00:18:32.344 "trtype": "TCP", 00:18:32.344 "adrfam": "IPv4", 00:18:32.344 "traddr": "10.0.0.1", 00:18:32.344 "trsvcid": "57962" 00:18:32.344 }, 00:18:32.344 "auth": { 00:18:32.344 "state": "completed", 00:18:32.344 "digest": "sha512", 00:18:32.344 "dhgroup": "ffdhe6144" 00:18:32.344 } 00:18:32.344 } 00:18:32.344 ]' 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.344 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.605 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.605 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.605 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.605 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:32.605 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret 
DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.546 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.118 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.118 { 00:18:34.118 "cntlid": 131, 00:18:34.118 "qid": 0, 00:18:34.118 "state": "enabled", 00:18:34.118 "thread": "nvmf_tgt_poll_group_000", 00:18:34.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:34.118 "listen_address": { 00:18:34.118 "trtype": "TCP", 00:18:34.118 "adrfam": "IPv4", 00:18:34.118 "traddr": "10.0.0.2", 00:18:34.118 "trsvcid": "4420" 00:18:34.118 }, 00:18:34.118 "peer_address": { 00:18:34.118 "trtype": "TCP", 00:18:34.118 "adrfam": "IPv4", 00:18:34.118 "traddr": "10.0.0.1", 00:18:34.118 "trsvcid": "57972" 00:18:34.118 }, 00:18:34.118 "auth": { 00:18:34.118 "state": "completed", 00:18:34.118 "digest": "sha512", 00:18:34.118 "dhgroup": "ffdhe6144" 00:18:34.118 } 00:18:34.118 } 00:18:34.118 ]' 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.118 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.379 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.379 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.379 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.379 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.379 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.379 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:34.379 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:35.320 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.581 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.842 00:18:35.842 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.842 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.842 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.103 { 00:18:36.103 "cntlid": 133, 00:18:36.103 "qid": 0, 00:18:36.103 "state": "enabled", 00:18:36.103 "thread": "nvmf_tgt_poll_group_000", 00:18:36.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:36.103 "listen_address": { 00:18:36.103 "trtype": "TCP", 00:18:36.103 "adrfam": "IPv4", 00:18:36.103 "traddr": "10.0.0.2", 00:18:36.103 "trsvcid": "4420" 00:18:36.103 }, 00:18:36.103 "peer_address": { 00:18:36.103 "trtype": "TCP", 00:18:36.103 "adrfam": "IPv4", 00:18:36.103 "traddr": "10.0.0.1", 00:18:36.103 "trsvcid": "58008" 00:18:36.103 }, 00:18:36.103 "auth": { 00:18:36.103 "state": "completed", 00:18:36.103 "digest": "sha512", 00:18:36.103 "dhgroup": "ffdhe6144" 00:18:36.103 } 00:18:36.103 } 00:18:36.103 ]' 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.103 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.364 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret 
DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:36.364 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:36.936 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.937 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.937 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.937 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.937 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.937 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.937 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.937 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:37.198 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.459 00:18:37.459 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.459 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.459 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.721 { 00:18:37.721 "cntlid": 135, 00:18:37.721 "qid": 0, 00:18:37.721 "state": "enabled", 00:18:37.721 "thread": "nvmf_tgt_poll_group_000", 00:18:37.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:37.721 "listen_address": { 00:18:37.721 "trtype": "TCP", 00:18:37.721 "adrfam": "IPv4", 00:18:37.721 "traddr": "10.0.0.2", 00:18:37.721 "trsvcid": "4420" 00:18:37.721 }, 00:18:37.721 "peer_address": { 00:18:37.721 "trtype": "TCP", 00:18:37.721 "adrfam": "IPv4", 00:18:37.721 "traddr": "10.0.0.1", 00:18:37.721 "trsvcid": "58034" 00:18:37.721 }, 00:18:37.721 "auth": { 00:18:37.721 "state": "completed", 00:18:37.721 "digest": "sha512", 00:18:37.721 "dhgroup": "ffdhe6144" 00:18:37.721 } 00:18:37.721 } 00:18:37.721 ]' 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.721 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.983 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.983 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.983 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.983 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.983 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.983 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:37.983 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.938 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.938 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.938 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.938 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.938 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.517 00:18:39.517 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.517 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.517 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.778 { 00:18:39.778 "cntlid": 137, 00:18:39.778 "qid": 0, 00:18:39.778 "state": "enabled", 00:18:39.778 "thread": "nvmf_tgt_poll_group_000", 00:18:39.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:39.778 "listen_address": { 00:18:39.778 "trtype": "TCP", 00:18:39.778 "adrfam": "IPv4", 00:18:39.778 "traddr": "10.0.0.2", 00:18:39.778 "trsvcid": "4420" 00:18:39.778 }, 00:18:39.778 "peer_address": { 00:18:39.778 "trtype": "TCP", 00:18:39.778 "adrfam": "IPv4", 00:18:39.778 "traddr": "10.0.0.1", 00:18:39.778 "trsvcid": "38256" 00:18:39.778 }, 00:18:39.778 "auth": { 00:18:39.778 "state": "completed", 00:18:39.778 "digest": "sha512", 00:18:39.778 "dhgroup": "ffdhe8192" 00:18:39.778 } 00:18:39.778 } 00:18:39.778 ]' 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.778 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.040 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:40.040 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:40.613 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.874 11:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.874 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.447 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.447 { 00:18:41.447 "cntlid": 139, 00:18:41.447 "qid": 0, 00:18:41.447 "state": "enabled", 00:18:41.447 "thread": "nvmf_tgt_poll_group_000", 00:18:41.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:41.447 "listen_address": { 00:18:41.447 "trtype": "TCP", 00:18:41.447 "adrfam": "IPv4", 00:18:41.447 "traddr": "10.0.0.2", 00:18:41.447 "trsvcid": "4420" 00:18:41.447 }, 00:18:41.447 "peer_address": { 00:18:41.447 "trtype": "TCP", 00:18:41.447 "adrfam": "IPv4", 00:18:41.447 "traddr": "10.0.0.1", 00:18:41.447 "trsvcid": "38282" 00:18:41.447 }, 00:18:41.447 "auth": { 00:18:41.447 "state": "completed", 00:18:41.447 "digest": "sha512", 00:18:41.447 "dhgroup": "ffdhe8192" 00:18:41.447 } 00:18:41.447 } 00:18:41.447 ]' 00:18:41.447 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.709 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.709 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.709 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.709 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.709 11:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.709 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.709 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.970 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:41.971 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: --dhchap-ctrl-secret DHHC-1:02:ODQyYzNmYWYwOTQ5ZTkyZTMyMzEzY2QwMGFjNGJjYzM3ZTM2YTgwZTQ2NDE0MzQ3u/dk4w==: 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.542 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.804 11:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.804 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.376 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.376 { 00:18:43.376 "cntlid": 141, 00:18:43.376 "qid": 0, 00:18:43.376 "state": "enabled", 00:18:43.376 "thread": "nvmf_tgt_poll_group_000", 00:18:43.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:43.376 "listen_address": { 00:18:43.376 "trtype": "TCP", 00:18:43.376 "adrfam": "IPv4", 00:18:43.376 "traddr": "10.0.0.2", 00:18:43.376 "trsvcid": "4420" 00:18:43.376 }, 00:18:43.376 "peer_address": { 00:18:43.376 "trtype": "TCP", 00:18:43.376 "adrfam": "IPv4", 00:18:43.376 "traddr": "10.0.0.1", 00:18:43.376 "trsvcid": "38312" 00:18:43.376 }, 00:18:43.376 "auth": { 00:18:43.376 "state": "completed", 00:18:43.376 "digest": "sha512", 00:18:43.376 "dhgroup": "ffdhe8192" 00:18:43.376 } 00:18:43.376 } 00:18:43.376 ]' 00:18:43.376 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.638 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.638 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.638 11:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.638 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.638 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.638 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.638 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.899 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:43.899 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:01:M2Y5NjRlMGJiYTVlZTMxNWNmMzY5NTY2N2QxNWJlYTZ6KRQB: 00:18:44.471 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.471 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.471 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.471 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.732 11:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.732 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.304 00:18:45.304 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.304 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.304 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.566 { 00:18:45.566 "cntlid": 143, 00:18:45.566 "qid": 0, 00:18:45.566 "state": "enabled", 00:18:45.566 "thread": "nvmf_tgt_poll_group_000", 00:18:45.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:45.566 "listen_address": { 00:18:45.566 "trtype": "TCP", 00:18:45.566 "adrfam": "IPv4", 00:18:45.566 "traddr": "10.0.0.2", 00:18:45.566 "trsvcid": "4420" 00:18:45.566 }, 00:18:45.566 "peer_address": { 00:18:45.566 "trtype": "TCP", 00:18:45.566 "adrfam": "IPv4", 00:18:45.566 "traddr": "10.0.0.1", 00:18:45.566 "trsvcid": "38336" 00:18:45.566 }, 00:18:45.566 "auth": { 00:18:45.566 "state": "completed", 00:18:45.566 "digest": "sha512", 00:18:45.566 "dhgroup": "ffdhe8192" 00:18:45.566 } 00:18:45.566 } 00:18:45.566 ]' 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.566 
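Each connect_authenticate pass in this run reduces to the host/target RPC sequence below. This is a minimal sketch, not a verbatim excerpt: the NQNs, addresses, socket paths, and key names are the ones from this run, and rpc.py is invoked via its full repo path in the log (shortened here to scripts/rpc.py).

# Target side: allow this host to authenticate against the subsystem with key3.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
# Host side: pin the initiator to a single digest/dhgroup pair, then attach.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
# Verify the qpair completed DH-HMAC-CHAP with the expected parameters,
# mirroring the jq assertions in the log (expect: completed / sha512 / ffdhe8192).
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .state, .digest, .dhgroup'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0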
11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.566 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.827 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:45.827 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.769 11:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.769 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.340 00:18:47.340 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.340 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.340 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.601 { 00:18:47.601 "cntlid": 145, 00:18:47.601 "qid": 0, 00:18:47.601 "state": "enabled", 00:18:47.601 "thread": "nvmf_tgt_poll_group_000", 00:18:47.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:47.601 "listen_address": { 00:18:47.601 "trtype": "TCP", 00:18:47.601 "adrfam": "IPv4", 00:18:47.601 "traddr": "10.0.0.2", 00:18:47.601 "trsvcid": "4420" 00:18:47.601 }, 00:18:47.601 "peer_address": { 00:18:47.601 
"trtype": "TCP", 00:18:47.601 "adrfam": "IPv4", 00:18:47.601 "traddr": "10.0.0.1", 00:18:47.601 "trsvcid": "38356" 00:18:47.601 }, 00:18:47.601 "auth": { 00:18:47.601 "state": "completed", 00:18:47.601 "digest": "sha512", 00:18:47.601 "dhgroup": "ffdhe8192" 00:18:47.601 } 00:18:47.601 } 00:18:47.601 ]' 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.601 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.862 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:47.862 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NWIxNWU4ZmFjYmZmN2ZiNjEwZjExODAwOGRlY2MyZjE4ZmRlMjEyNmI1NTFkM2I3519MwQ==: --dhchap-ctrl-secret DHHC-1:03:NmE1MjFkMWIyNzkwNTQxZjZiZmQ0N2RlNGQyNmE0MjgzMTA5MjZiYmVhNzhjMzc5MjlmNTAzNTcyZGRkZjcwNYvc3lY=: 00:18:48.434 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:48.695 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:48.955 request: 00:18:48.955 { 00:18:48.955 "name": "nvme0", 00:18:48.955 "trtype": "tcp", 00:18:48.955 "traddr": "10.0.0.2", 00:18:48.955 "adrfam": "ipv4", 00:18:48.955 "trsvcid": "4420", 00:18:48.955 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:48.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:48.955 "prchk_reftag": false, 00:18:48.955 "prchk_guard": false, 00:18:48.955 "hdgst": false, 00:18:48.955 "ddgst": false, 00:18:48.955 "dhchap_key": "key2", 00:18:48.955 "allow_unrecognized_csi": false, 00:18:48.955 "method": "bdev_nvme_attach_controller", 00:18:48.955 "req_id": 1 00:18:48.955 } 00:18:48.955 Got JSON-RPC error response 00:18:48.955 response: 00:18:48.955 { 00:18:48.955 "code": -5, 00:18:48.955 "message": "Input/output error" 00:18:48.955 } 00:18:49.215 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.216 11:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:49.216 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:49.476 request: 00:18:49.476 { 00:18:49.476 "name": "nvme0", 00:18:49.476 "trtype": "tcp", 00:18:49.476 "traddr": "10.0.0.2", 00:18:49.476 "adrfam": "ipv4", 00:18:49.476 "trsvcid": "4420", 00:18:49.476 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:49.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:49.476 "prchk_reftag": false, 00:18:49.476 "prchk_guard": false, 00:18:49.476 "hdgst": false, 00:18:49.476 "ddgst": false, 00:18:49.476 "dhchap_key": "key1", 00:18:49.476 "dhchap_ctrlr_key": "ckey2", 00:18:49.476 "allow_unrecognized_csi": false, 00:18:49.476 "method": "bdev_nvme_attach_controller", 00:18:49.476 "req_id": 1 00:18:49.476 } 00:18:49.476 Got JSON-RPC error response 00:18:49.476 response: 00:18:49.476 { 00:18:49.476 "code": -5, 00:18:49.476 "message": "Input/output error" 00:18:49.476 } 00:18:49.476 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:49.476 11:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.476 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.476 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.476 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.476 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.476 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.758 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.020 request: 00:18:50.020 { 00:18:50.020 "name": "nvme0", 00:18:50.020 "trtype": "tcp", 00:18:50.020 "traddr": "10.0.0.2", 00:18:50.020 "adrfam": "ipv4", 00:18:50.020 "trsvcid": "4420", 00:18:50.020 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:50.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:50.020 "prchk_reftag": false, 00:18:50.020 "prchk_guard": false, 00:18:50.020 "hdgst": false, 00:18:50.020 "ddgst": false, 00:18:50.020 "dhchap_key": "key1", 00:18:50.020 "dhchap_ctrlr_key": "ckey1", 00:18:50.020 "allow_unrecognized_csi": false, 00:18:50.020 "method": "bdev_nvme_attach_controller", 00:18:50.020 "req_id": 1 00:18:50.020 } 00:18:50.020 Got JSON-RPC error response 00:18:50.020 response: 00:18:50.020 { 00:18:50.020 "code": -5, 00:18:50.020 "message": "Input/output error" 00:18:50.020 } 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3496662 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3496662 ']' 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3496662 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.020 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3496662 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3496662' 00:18:50.282 killing process with pid 3496662 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3496662 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3496662 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3524585 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3524585 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3524585 ']' 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.282 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3524585 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3524585 ']' 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
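At this point the previous target process (pid 3496662) has been killed and a fresh nvmf_tgt (pid 3524585) is launched paused, so that keyring keys can be registered before the app initializes. A minimal sketch of that restart, using the netns and flags shown in the log (the nvmf_tgt binary path is shortened; waitforlisten is the harness helper from common/autotest_common.sh seen above):

# Relaunch the target inside the test netns; --wait-for-rpc holds off
# initialization until key setup is done, -L nvmf_auth enables auth logging.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll /var/tmp/spdk.sock until the app answers RPCs before proceeding.
waitforlisten "$nvmfpid"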
00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.224 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 null0 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gHL 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.hwb ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hwb 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.oj0 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Gjh ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Gjh 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:51.485 11:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Qij 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ALc ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ALc 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:51.485 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9kL 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
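The block above registers the generated DHCHAP secrets with the target keyring in key/controller-key pairs (key0..key3, plus ckeyN where a controller key exists), then runs the sha512/ffdhe8192 round of connect_authenticate: the host NQN is admitted to nqn.2024-03.io.spdk:cnode0 with key3 bound to it, and the SPDK host side attaches over TCP with the same key. Condensed to the two RPCs that carry the handshake (host UUID shortened for readability; in the harness the target-side call goes through the rpc_cmd wrapper and the host side through hostrpc, exactly as traced):

  # target side: admit the host NQN and pin DHCHAP key3 to it
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-... --dhchap-key key3
  # host side: attach over TCP, authenticating with the matching key
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-... \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3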
00:18:51.486 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.428 nvme0n1 00:18:52.428 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.428 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.428 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.689 { 00:18:52.689 "cntlid": 1, 00:18:52.689 "qid": 0, 00:18:52.689 "state": "enabled", 00:18:52.689 "thread": "nvmf_tgt_poll_group_000", 00:18:52.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:52.689 "listen_address": { 00:18:52.689 "trtype": "TCP", 00:18:52.689 "adrfam": "IPv4", 00:18:52.689 "traddr": "10.0.0.2", 00:18:52.689 "trsvcid": "4420" 00:18:52.689 }, 00:18:52.689 "peer_address": { 00:18:52.689 "trtype": "TCP", 00:18:52.689 "adrfam": "IPv4", 00:18:52.689 "traddr": "10.0.0.1", 00:18:52.689 "trsvcid": "53024" 00:18:52.689 }, 00:18:52.689 "auth": { 00:18:52.689 "state": "completed", 00:18:52.689 "digest": "sha512", 00:18:52.689 "dhgroup": "ffdhe8192" 00:18:52.689 } 00:18:52.689 } 00:18:52.689 ]' 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.689 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.949 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:52.949 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:53.612 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.898 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.189 request: 00:18:54.189 { 00:18:54.189 "name": "nvme0", 00:18:54.189 "trtype": "tcp", 00:18:54.189 "traddr": "10.0.0.2", 00:18:54.189 "adrfam": "ipv4", 00:18:54.189 "trsvcid": "4420", 00:18:54.189 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:54.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:54.189 "prchk_reftag": false, 00:18:54.189 "prchk_guard": false, 00:18:54.189 "hdgst": false, 00:18:54.189 "ddgst": false, 00:18:54.189 "dhchap_key": "key3", 00:18:54.189 "allow_unrecognized_csi": false, 00:18:54.189 "method": "bdev_nvme_attach_controller", 00:18:54.189 "req_id": 1 00:18:54.189 } 00:18:54.189 Got JSON-RPC error response 00:18:54.189 response: 00:18:54.189 { 00:18:54.189 "code": -5, 00:18:54.189 "message": "Input/output error" 00:18:54.189 } 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.189 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.480 request: 00:18:54.480 { 00:18:54.480 "name": "nvme0", 00:18:54.480 "trtype": "tcp", 00:18:54.480 "traddr": "10.0.0.2", 00:18:54.480 "adrfam": "ipv4", 00:18:54.480 "trsvcid": "4420", 00:18:54.480 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:54.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:54.480 "prchk_reftag": false, 00:18:54.480 "prchk_guard": false, 00:18:54.480 "hdgst": false, 00:18:54.480 "ddgst": false, 00:18:54.480 "dhchap_key": "key3", 00:18:54.480 "allow_unrecognized_csi": false, 00:18:54.480 "method": "bdev_nvme_attach_controller", 00:18:54.480 "req_id": 1 00:18:54.480 } 00:18:54.480 Got JSON-RPC error response 00:18:54.480 response: 00:18:54.480 { 00:18:54.480 "code": -5, 00:18:54.480 "message": "Input/output error" 00:18:54.480 } 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.480 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.778 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.056 request: 00:18:55.056 { 00:18:55.056 "name": "nvme0", 00:18:55.056 "trtype": "tcp", 00:18:55.056 "traddr": "10.0.0.2", 00:18:55.056 "adrfam": "ipv4", 00:18:55.056 "trsvcid": "4420", 00:18:55.056 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:55.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:55.056 "prchk_reftag": false, 00:18:55.056 "prchk_guard": false, 00:18:55.056 "hdgst": false, 00:18:55.056 "ddgst": false, 00:18:55.056 "dhchap_key": "key0", 00:18:55.056 "dhchap_ctrlr_key": "key1", 00:18:55.056 "allow_unrecognized_csi": false, 00:18:55.056 "method": "bdev_nvme_attach_controller", 00:18:55.056 "req_id": 1 00:18:55.056 } 00:18:55.056 Got JSON-RPC error response 00:18:55.056 response: 00:18:55.056 { 00:18:55.056 "code": -5, 00:18:55.056 "message": "Input/output error" 00:18:55.056 } 00:18:55.056 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:55.056 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.056 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.056 11:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.056 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:55.056 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:55.056 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:55.056 nvme0n1 00:18:55.316 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:55.316 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:55.316 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.316 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.316 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.316 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.574 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:55.574 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.574 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.574 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.574 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:55.574 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:55.574 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:56.512 nvme0n1 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:56.512 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.773 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.773 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:56.773 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: --dhchap-ctrl-secret DHHC-1:03:ZWVjNGY2NDU2ODZhYzdhYjM4ZGRkMzdjNTk2YjA5MDg2ZmVlMGNkYmY1MzcyNWJlOWFiMDQxNjNhMjI0ZTM5NFLeT5E=: 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:57.713 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:58.282 request: 00:18:58.282 { 00:18:58.282 "name": "nvme0", 00:18:58.282 "trtype": "tcp", 00:18:58.282 "traddr": "10.0.0.2", 00:18:58.282 "adrfam": "ipv4", 00:18:58.282 "trsvcid": "4420", 00:18:58.282 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:58.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:58.282 "prchk_reftag": false, 00:18:58.282 "prchk_guard": false, 00:18:58.282 "hdgst": false, 00:18:58.282 "ddgst": false, 00:18:58.282 "dhchap_key": "key1", 00:18:58.282 "allow_unrecognized_csi": false, 00:18:58.282 "method": "bdev_nvme_attach_controller", 00:18:58.282 "req_id": 1 00:18:58.282 } 00:18:58.282 Got JSON-RPC error response 00:18:58.282 response: 00:18:58.282 { 00:18:58.282 "code": -5, 00:18:58.282 "message": "Input/output error" 00:18:58.282 } 00:18:58.282 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:58.282 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.282 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.282 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.282 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:58.282 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:58.282 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:59.222 nvme0n1 00:18:59.222 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:59.222 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:59.222 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.222 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.222 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.222 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.482 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.483 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.483 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.483 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.483 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:59.483 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:59.483 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:59.743 nvme0n1 00:18:59.743 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:59.743 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.743 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:59.743 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.743 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.743 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: '' 2s 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: ]] 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTMyYTA5MTVmZTllZjRjNjFlYzRkZGY3OWI3YjU1NDL3NVPX: 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:00.003 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: 2s 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: ]] 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTZiNzY3NWM5ZTU4ZDlkZDUzNzBjNzg1NjY3ZTEwOWZhZGM0MmQ5YTViOWNkYmYx++tksA==: 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:02.545 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:04.460 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:05.030 nvme0n1 00:19:05.030 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:05.030 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.030 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.030 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.030 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:05.030 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:05.601 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:05.601 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:05.601 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:05.862 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.122 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.693 request: 00:19:06.693 { 00:19:06.693 "name": "nvme0", 00:19:06.693 "dhchap_key": "key1", 00:19:06.693 "dhchap_ctrlr_key": "key3", 00:19:06.693 "method": "bdev_nvme_set_keys", 00:19:06.693 "req_id": 1 00:19:06.693 } 00:19:06.693 Got JSON-RPC error response 00:19:06.693 response: 00:19:06.693 { 00:19:06.693 "code": -13, 00:19:06.693 "message": "Permission denied" 00:19:06.693 } 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:06.693 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:08.076 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:08.076 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.076 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:08.076 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:09.026 nvme0n1 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
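Two failure modes are being distinguished in this stretch: a DH-HMAC-CHAP handshake that runs and fails (digest, dhgroup, or key mismatch at attach time) surfaces as JSON-RPC code -5, "Input/output error", while bdev_nvme_set_keys naming keys the subsystem does not hold is refused with -13, "Permission denied" (the key1/key3 probe above; a key2/key0 probe follows just below and fails the same way). Because the controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, one that can no longer re-authenticate drops within about a second, and the @262/@263 trace is just a wait for that teardown, equivalent to:

  # spin until the controller that failed re-authentication is gone
  while (( $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done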
00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:09.026 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:09.286 request: 00:19:09.286 { 00:19:09.286 "name": "nvme0", 00:19:09.286 "dhchap_key": "key2", 00:19:09.286 "dhchap_ctrlr_key": "key0", 00:19:09.286 "method": "bdev_nvme_set_keys", 00:19:09.286 "req_id": 1 00:19:09.286 } 00:19:09.286 Got JSON-RPC error response 00:19:09.286 response: 00:19:09.286 { 00:19:09.286 "code": -13, 00:19:09.286 "message": "Permission denied" 00:19:09.286 } 00:19:09.286 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.286 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.286 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.286 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.286 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:09.286 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:09.286 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.547 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:09.547 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:10.488 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:10.488 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:10.488 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3496963 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3496963 ']' 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3496963 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:10.750 
11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3496963 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3496963' 00:19:10.750 killing process with pid 3496963 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3496963 00:19:10.750 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3496963 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.011 rmmod nvme_tcp 00:19:11.011 rmmod nvme_fabrics 00:19:11.011 rmmod nvme_keyring 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3524585 ']' 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3524585 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3524585 ']' 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3524585 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.011 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3524585 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3524585' 00:19:11.273 killing process with pid 3524585 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3524585 00:19:11.273 11:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3524585 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.273 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.818 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.818 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gHL /tmp/spdk.key-sha256.oj0 /tmp/spdk.key-sha384.Qij /tmp/spdk.key-sha512.9kL /tmp/spdk.key-sha512.hwb /tmp/spdk.key-sha384.Gjh /tmp/spdk.key-sha256.ALc '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:13.818 00:19:13.818 real 2m44.929s 00:19:13.818 user 6m7.810s 00:19:13.818 sys 0m24.364s 00:19:13.818 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.818 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.818 ************************************ 00:19:13.818 END TEST nvmf_auth_target 00:19:13.818 ************************************ 00:19:13.818 11:34:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.819 ************************************ 00:19:13.819 START TEST nvmf_bdevio_no_huge 00:19:13.819 ************************************ 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:13.819 * Looking for test storage... 
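The auth suite closes above with a deliberate negative check: bdev_nvme_set_keys is asked to rotate to key2/key0 and the target rejects it with JSON-RPC error -13 (Permission denied), which the harness treats as the expected result (es=1) before tearing everything down (killprocess, rmmod of nvme-tcp/nvme-fabrics/nvme-keyring, iptables restore, address flush, key-file removal). The run starting here exercises bdevio over TCP with hugepages disabled. To replay just this sub-test against a local tree, the same entry point can be invoked directly; a minimal sketch, with SPDK_DIR standing in for your SPDK checkout (a placeholder, not a variable from this log):

  # Replay the step traced above: bdevio over TCP with hugepages off, as
  # run_test nvmf_bdevio_no_huge does here. Root is required for the
  # namespace/iptables setup the script performs.
  SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
  sudo "$SPDK_DIR/test/nvmf/target/bdevio.sh" --transport=tcp --no-hugepages

The script sources test/nvmf/common.sh first, which is why the trace below walks through coverage-tool probing and PCI/NIC discovery before any test I/O.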
00:19:13.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.819 --rc genhtml_branch_coverage=1 00:19:13.819 --rc genhtml_function_coverage=1 00:19:13.819 --rc genhtml_legend=1 00:19:13.819 --rc geninfo_all_blocks=1 00:19:13.819 --rc geninfo_unexecuted_blocks=1 00:19:13.819 00:19:13.819 ' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.819 --rc genhtml_branch_coverage=1 00:19:13.819 --rc genhtml_function_coverage=1 00:19:13.819 --rc genhtml_legend=1 00:19:13.819 --rc geninfo_all_blocks=1 00:19:13.819 --rc geninfo_unexecuted_blocks=1 00:19:13.819 00:19:13.819 ' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.819 --rc genhtml_branch_coverage=1 00:19:13.819 --rc genhtml_function_coverage=1 00:19:13.819 --rc genhtml_legend=1 00:19:13.819 --rc geninfo_all_blocks=1 00:19:13.819 --rc geninfo_unexecuted_blocks=1 00:19:13.819 00:19:13.819 ' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.819 --rc genhtml_branch_coverage=1 00:19:13.819 --rc genhtml_function_coverage=1 00:19:13.819 --rc genhtml_legend=1 00:19:13.819 --rc geninfo_all_blocks=1 00:19:13.819 --rc geninfo_unexecuted_blocks=1 00:19:13.819 00:19:13.819 ' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.819 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:13.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.820 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.959 
11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.959 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:21.960 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:21.960 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:21.960 Found net devices under 0000:31:00.0: cvl_0_0 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:21.960 Found net devices under 0000:31:00.1: cvl_0_1 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.960 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:21.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:19:21.960 00:19:21.960 --- 10.0.0.2 ping statistics --- 00:19:21.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.960 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:19:21.960 00:19:21.960 --- 10.0.0.1 ping statistics --- 00:19:21.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.960 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3532925 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3532925 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3532925 ']' 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.960 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.960 [2024-12-09 11:34:13.345566] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:19:21.960 [2024-12-09 11:34:13.345638] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:21.960 [2024-12-09 11:34:13.450421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.960 [2024-12-09 11:34:13.511175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.960 [2024-12-09 11:34:13.511218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.960 [2024-12-09 11:34:13.511227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.960 [2024-12-09 11:34:13.511234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.960 [2024-12-09 11:34:13.511240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
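The launch command logged at nvmf/common.sh@508 above runs the target inside the test's network namespace without hugepages. Condensed into a standalone invocation (SPDK_DIR again a placeholder for the checkout path; the cvl_0_0_ns_spdk namespace and its 10.0.0.2 address were created earlier in this trace):

  # Start nvmf_tgt the way the harness does:
  #   -i 0               shared-memory instance ID 0
  #   -e 0xFFFF          enable all tracepoint groups (hence the spdk_trace hints above)
  #   --no-huge -s 1024  run on 1024 MiB of ordinary pages instead of hugepages
  #   -m 0x78            core mask 0b1111000, i.e. one reactor each on cores 3-6
  sudo ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The four "Reactor started on core N" notices that follow are the 0x78 mask being honored: cores 3, 4, 5 and 6 each pick up a reactor.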
00:19:21.960 [2024-12-09 11:34:13.512660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:21.960 [2024-12-09 11:34:13.512818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:21.961 [2024-12-09 11:34:13.512977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.961 [2024-12-09 11:34:13.512978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.222 [2024-12-09 11:34:14.202068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.222 Malloc0 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.222 [2024-12-09 11:34:14.256007] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:22.222 { 00:19:22.222 "params": { 00:19:22.222 "name": "Nvme$subsystem", 00:19:22.222 "trtype": "$TEST_TRANSPORT", 00:19:22.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.222 "adrfam": "ipv4", 00:19:22.222 "trsvcid": "$NVMF_PORT", 00:19:22.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.222 "hdgst": ${hdgst:-false}, 00:19:22.222 "ddgst": ${ddgst:-false} 00:19:22.222 }, 00:19:22.222 "method": "bdev_nvme_attach_controller" 00:19:22.222 } 00:19:22.222 EOF 00:19:22.222 )") 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:22.222 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:22.222 "params": { 00:19:22.222 "name": "Nvme1", 00:19:22.222 "trtype": "tcp", 00:19:22.222 "traddr": "10.0.0.2", 00:19:22.222 "adrfam": "ipv4", 00:19:22.222 "trsvcid": "4420", 00:19:22.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.222 "hdgst": false, 00:19:22.222 "ddgst": false 00:19:22.222 }, 00:19:22.222 "method": "bdev_nvme_attach_controller" 00:19:22.222 }' 00:19:22.222 [2024-12-09 11:34:14.313124] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
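Before bdevio starts, the harness has already provisioned the target through rpc_cmd. The same five calls, written out against rpc.py directly (a sketch; RPC abbreviates the scripts/rpc.py path used throughout this log, and the -a/-s glosses follow the nvmf_create_subsystem usage here: allow any host, set the serial number):

  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, same options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then expands its heredoc template into the Nvme1 attach-controller stanza printed above, and bdevio consumes it through /dev/fd/62, so the initiator-side configuration never touches disk.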
00:19:22.222 [2024-12-09 11:34:14.313199] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3533281 ] 00:19:22.482 [2024-12-09 11:34:14.394859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.482 [2024-12-09 11:34:14.450600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.482 [2024-12-09 11:34:14.450727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.482 [2024-12-09 11:34:14.450731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.742 I/O targets: 00:19:22.742 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.742 00:19:22.742 00:19:22.742 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.742 http://cunit.sourceforge.net/ 00:19:22.742 00:19:22.742 00:19:22.742 Suite: bdevio tests on: Nvme1n1 00:19:22.743 Test: blockdev write read block ...passed 00:19:22.743 Test: blockdev write zeroes read block ...passed 00:19:22.743 Test: blockdev write zeroes read no split ...passed 00:19:22.743 Test: blockdev write zeroes read split ...passed 00:19:22.743 Test: blockdev write zeroes read split partial ...passed 00:19:22.743 Test: blockdev reset ...[2024-12-09 11:34:14.838370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:22.743 [2024-12-09 11:34:14.838431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93af70 (9): Bad file descriptor 00:19:22.743 [2024-12-09 11:34:14.855199] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:22.743 passed 00:19:22.743 Test: blockdev write read 8 blocks ...passed 00:19:22.743 Test: blockdev write read size > 128k ...passed 00:19:22.743 Test: blockdev write read invalid size ...passed 00:19:23.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:23.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:23.002 Test: blockdev write read max offset ...passed 00:19:23.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:23.002 Test: blockdev writev readv 8 blocks ...passed 00:19:23.002 Test: blockdev writev readv 30 x 1block ...passed 00:19:23.002 Test: blockdev writev readv block ...passed 00:19:23.002 Test: blockdev writev readv size > 128k ...passed 00:19:23.002 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:23.002 Test: blockdev comparev and writev ...[2024-12-09 11:34:15.161925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.002 [2024-12-09 11:34:15.161949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:23.002 [2024-12-09 11:34:15.161961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.002 [2024-12-09 11:34:15.161967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.002 [2024-12-09 11:34:15.162460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.002 [2024-12-09 11:34:15.162468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:23.002 [2024-12-09 11:34:15.162484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.002 [2024-12-09 11:34:15.162490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:23.002 [2024-12-09 11:34:15.162960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.002 [2024-12-09 11:34:15.162968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:23.002 [2024-12-09 11:34:15.162978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.002 [2024-12-09 11:34:15.162983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:23.267 [2024-12-09 11:34:15.163467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.267 [2024-12-09 11:34:15.163476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:23.267 [2024-12-09 11:34:15.163485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:23.267 [2024-12-09 11:34:15.163491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:23.267 passed 00:19:23.267 Test: blockdev nvme passthru rw ...passed 00:19:23.267 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:34:15.247861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.267 [2024-12-09 11:34:15.247872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:23.267 [2024-12-09 11:34:15.248220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.267 [2024-12-09 11:34:15.248228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:23.267 [2024-12-09 11:34:15.248564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.267 [2024-12-09 11:34:15.248572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:23.267 [2024-12-09 11:34:15.248884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:23.267 [2024-12-09 11:34:15.248892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:23.267 passed 00:19:23.267 Test: blockdev nvme admin passthru ...passed 00:19:23.267 Test: blockdev copy ...passed 00:19:23.267 00:19:23.267 Run Summary: Type Total Ran Passed Failed Inactive 00:19:23.267 suites 1 1 n/a 0 0 00:19:23.267 tests 23 23 23 0 0 00:19:23.267 asserts 152 152 152 0 n/a 00:19:23.267 00:19:23.267 Elapsed time = 1.300 seconds 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:23.526 rmmod nvme_tcp 00:19:23.526 rmmod nvme_fabrics 00:19:23.526 rmmod nvme_keyring 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3532925 ']' 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3532925 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3532925 ']' 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3532925 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.526 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3532925 00:19:23.786 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:23.786 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:23.786 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3532925' 00:19:23.786 killing process with pid 3532925 00:19:23.786 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3532925 00:19:23.786 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3532925 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.046 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.957 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:25.957 00:19:25.957 real 0m12.567s 00:19:25.957 user 0m14.075s 00:19:25.957 sys 0m6.651s 00:19:25.957 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.957 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.957 ************************************ 00:19:25.957 END TEST nvmf_bdevio_no_huge 00:19:25.957 ************************************ 00:19:25.957 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.957 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:25.957 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.957 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.219 ************************************ 00:19:26.219 START TEST nvmf_tls 00:19:26.219 ************************************ 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:26.219 * Looking for test storage... 00:19:26.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.219 --rc genhtml_branch_coverage=1 00:19:26.219 --rc genhtml_function_coverage=1 00:19:26.219 --rc genhtml_legend=1 00:19:26.219 --rc geninfo_all_blocks=1 00:19:26.219 --rc geninfo_unexecuted_blocks=1 00:19:26.219 00:19:26.219 ' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.219 --rc genhtml_branch_coverage=1 00:19:26.219 --rc genhtml_function_coverage=1 00:19:26.219 --rc genhtml_legend=1 00:19:26.219 --rc geninfo_all_blocks=1 00:19:26.219 --rc geninfo_unexecuted_blocks=1 00:19:26.219 00:19:26.219 ' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.219 --rc genhtml_branch_coverage=1 00:19:26.219 --rc genhtml_function_coverage=1 00:19:26.219 --rc genhtml_legend=1 00:19:26.219 --rc geninfo_all_blocks=1 00:19:26.219 --rc geninfo_unexecuted_blocks=1 00:19:26.219 00:19:26.219 ' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.219 --rc genhtml_branch_coverage=1 00:19:26.219 --rc genhtml_function_coverage=1 00:19:26.219 --rc genhtml_legend=1 00:19:26.219 --rc geninfo_all_blocks=1 00:19:26.219 --rc geninfo_unexecuted_blocks=1 00:19:26.219 00:19:26.219 ' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
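[Annotation, not part of the captured log] The cmp_versions trace above decides whether the installed lcov predates 2.x by splitting each version string on ".", "-" and ":" and comparing the numeric components pairwise. A minimal Python sketch of that comparison, assuming purely numeric components (as in the "1.15 < 2" case traced here) and treating missing components as zero:

import re

def version_lt(a: str, b: str) -> bool:
    # Split on the same separators as the shell's IFS=.-: and compare
    # componentwise; zero-padding mirrors iterating up to the longer list.
    pa = [int(x) for x in re.split(r"[.:-]", a)]
    pb = [int(x) for x in re.split(r"[.:-]", b)]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb  # list comparison: first differing component decides

assert version_lt("1.15", "2")        # lcov 1.15 predates 2.x
assert not version_lt("2.0", "1.15")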
00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.219 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:26.220 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:34.376 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:34.376 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:34.376 Found net devices under 0000:31:00.0: cvl_0_0 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:34.376 Found net devices under 0000:31:00.1: cvl_0_1 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:34.376 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:34.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:19:34.377 00:19:34.377 --- 10.0.0.2 ping statistics --- 00:19:34.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.377 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:19:34.377 00:19:34.377 --- 10.0.0.1 ping statistics --- 00:19:34.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.377 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3537716 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3537716 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3537716 ']' 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.377 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.377 [2024-12-09 11:34:25.845404] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
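[Annotation, not part of the captured log] The interface setup traced above gives the target one E810 port (cvl_0_0, 10.0.0.2) inside the cvl_0_0_ns_spdk namespace and leaves its peer (cvl_0_1, 10.0.0.1) in the root namespace, so target and initiator traffic crosses the physical link rather than loopback; the two pings confirm reachability in both directions. A sketch of the equivalent setup, reusing the interface names from the log (requires root, illustrative only):

import subprocess

NS = "cvl_0_0_ns_spdk"
setup = [
    # target port, isolated in its own network namespace
    ["ip", "netns", "add", NS],
    ["ip", "link", "set", "cvl_0_0", "netns", NS],
    ["ip", "netns", "exec", NS, "ip", "addr", "add", "10.0.0.2/24", "dev", "cvl_0_0"],
    ["ip", "netns", "exec", NS, "ip", "link", "set", "cvl_0_0", "up"],
    ["ip", "netns", "exec", NS, "ip", "link", "set", "lo", "up"],
    # initiator port stays in the root namespace
    ["ip", "addr", "add", "10.0.0.1/24", "dev", "cvl_0_1"],
    ["ip", "link", "set", "cvl_0_1", "up"],
]
for cmd in setup:
    subprocess.run(cmd, check=True)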
00:19:34.377 [2024-12-09 11:34:25.845470] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.377 [2024-12-09 11:34:25.948659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.377 [2024-12-09 11:34:25.999529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.377 [2024-12-09 11:34:25.999582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.377 [2024-12-09 11:34:25.999591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.377 [2024-12-09 11:34:25.999599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.377 [2024-12-09 11:34:25.999605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.377 [2024-12-09 11:34:26.000404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:34.638 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:34.898 true 00:19:34.898 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.898 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:35.159 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:35.159 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:35.159 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:35.159 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:35.159 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:35.419 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:35.419 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:35.419 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:35.680 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:35.680 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:35.941 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:35.941 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:35.941 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:35.941 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:35.941 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:35.941 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:35.941 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:36.202 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:36.202 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:36.462 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:36.462 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:36.462 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:36.462 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:36.462 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.gFK73LExL7 00:19:36.722 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:36.982 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.uBUfthLvJ1 00:19:36.982 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:36.982 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:36.982 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.gFK73LExL7 00:19:36.982 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.uBUfthLvJ1 00:19:36.982 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:36.982 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:37.242 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.gFK73LExL7 00:19:37.242 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gFK73LExL7 00:19:37.242 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:37.503 [2024-12-09 11:34:29.452092] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.503 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.503 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.763 [2024-12-09 11:34:29.776869] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.763 [2024-12-09 11:34:29.777078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.763 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:38.023 malloc0 00:19:38.023 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:38.023 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gFK73LExL7 00:19:38.283 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.544 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.gFK73LExL7 00:19:48.546 Initializing NVMe Controllers 00:19:48.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:48.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:48.546 Initialization complete. Launching workers. 00:19:48.546 ======================================================== 00:19:48.546 Latency(us) 00:19:48.546 Device Information : IOPS MiB/s Average min max 00:19:48.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18606.25 72.68 3439.71 1100.81 4258.19 00:19:48.546 ======================================================== 00:19:48.546 Total : 18606.25 72.68 3439.71 1100.81 4258.19 00:19:48.546 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFK73LExL7 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gFK73LExL7 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3540748 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3540748 /var/tmp/bdevperf.sock 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3540748 ']' 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:48.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.546 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.546 [2024-12-09 11:34:40.615497] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:19:48.546 [2024-12-09 11:34:40.615556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3540748 ] 00:19:48.546 [2024-12-09 11:34:40.674575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.546 [2024-12-09 11:34:40.703612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.806 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.806 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.806 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gFK73LExL7 00:19:48.806 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.066 [2024-12-09 11:34:41.097879] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.066 TLSTESTn1 00:19:49.066 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:49.326 Running I/O for 10 seconds... 
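[Annotation, not part of the captured log] /tmp/tmp.gFK73LExL7 and /tmp/tmp.uBUfthLvJ1 hold the two TLS PSKs that format_interchange_psk emitted earlier; the spdk_nvme_perf run above and the bdevperf run in progress here both authenticate with the first one. A rough Python reconstruction of the interchange format, assuming (as the traced output suggests) that the configured key bytes are suffixed with a little-endian CRC-32 and base64-encoded; an illustrative sketch, not SPDK's implementation:

import base64
import struct
import zlib

def interchange_psk(key: bytes, hash_id: int = 1) -> str:
    # NVMeTLSkey-1:<hh>:<base64(key || CRC32(key), CRC little-endian)>:
    crc = zlib.crc32(key) & 0xFFFFFFFF
    blob = key + struct.pack("<I", crc)
    return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(blob).decode())

# The trace feeds the ASCII hex string itself as key material; per the log
# above this should reproduce the key0 value written to /tmp/tmp.gFK73LExL7:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(interchange_psk(b"00112233445566778899aabbccddeeff"))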
00:19:51.207 5184.00 IOPS, 20.25 MiB/s [2024-12-09T10:34:44.308Z] 5841.00 IOPS, 22.82 MiB/s [2024-12-09T10:34:45.692Z] 5967.33 IOPS, 23.31 MiB/s [2024-12-09T10:34:46.633Z] 6034.50 IOPS, 23.57 MiB/s [2024-12-09T10:34:47.574Z] 5868.20 IOPS, 22.92 MiB/s [2024-12-09T10:34:48.514Z] 5894.33 IOPS, 23.02 MiB/s [2024-12-09T10:34:49.454Z] 5909.00 IOPS, 23.08 MiB/s [2024-12-09T10:34:50.395Z] 5984.38 IOPS, 23.38 MiB/s [2024-12-09T10:34:51.334Z] 5991.56 IOPS, 23.40 MiB/s [2024-12-09T10:34:51.594Z] 5962.30 IOPS, 23.29 MiB/s 00:19:59.432 Latency(us) 00:19:59.432 [2024-12-09T10:34:51.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.432 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.432 Verification LBA range: start 0x0 length 0x2000 00:19:59.433 TLSTESTn1 : 10.04 5950.88 23.25 0.00 0.00 21457.14 5106.35 82138.45 00:19:59.433 [2024-12-09T10:34:51.595Z] =================================================================================================================== 00:19:59.433 [2024-12-09T10:34:51.595Z] Total : 5950.88 23.25 0.00 0.00 21457.14 5106.35 82138.45 00:19:59.433 { 00:19:59.433 "results": [ 00:19:59.433 { 00:19:59.433 "job": "TLSTESTn1", 00:19:59.433 "core_mask": "0x4", 00:19:59.433 "workload": "verify", 00:19:59.433 "status": "finished", 00:19:59.433 "verify_range": { 00:19:59.433 "start": 0, 00:19:59.433 "length": 8192 00:19:59.433 }, 00:19:59.433 "queue_depth": 128, 00:19:59.433 "io_size": 4096, 00:19:59.433 "runtime": 10.040526, 00:19:59.433 "iops": 5950.8834497316175, 00:19:59.433 "mibps": 23.24563847551413, 00:19:59.433 "io_failed": 0, 00:19:59.433 "io_timeout": 0, 00:19:59.433 "avg_latency_us": 21457.13924641562, 00:19:59.433 "min_latency_us": 5106.346666666666, 00:19:59.433 "max_latency_us": 82138.45333333334 00:19:59.433 } 00:19:59.433 ], 00:19:59.433 "core_count": 1 00:19:59.433 } 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3540748 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3540748 ']' 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3540748 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3540748 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3540748' 00:19:59.433 killing process with pid 3540748 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3540748 00:19:59.433 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.433 00:19:59.433 Latency(us) 00:19:59.433 [2024-12-09T10:34:51.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.433 [2024-12-09T10:34:51.595Z] 
=================================================================================================================== 00:19:59.433 [2024-12-09T10:34:51.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3540748 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uBUfthLvJ1 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uBUfthLvJ1 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uBUfthLvJ1 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uBUfthLvJ1 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3542761 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3542761 /var/tmp/bdevperf.sock 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3542761 ']' 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.433 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.433 [2024-12-09 11:34:51.587628] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:19:59.433 [2024-12-09 11:34:51.587686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542761 ] 00:19:59.693 [2024-12-09 11:34:51.646312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.693 [2024-12-09 11:34:51.674839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.693 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.693 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.693 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uBUfthLvJ1 00:19:59.953 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.953 [2024-12-09 11:34:52.057092] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.953 [2024-12-09 11:34:52.066048] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.953 [2024-12-09 11:34:52.066374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7c5b0 (107): Transport endpoint is not connected 00:19:59.953 [2024-12-09 11:34:52.067369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7c5b0 (9): Bad file descriptor 00:19:59.953 [2024-12-09 11:34:52.068371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:59.953 [2024-12-09 11:34:52.068379] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.953 [2024-12-09 11:34:52.068385] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:59.953 [2024-12-09 11:34:52.068393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
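[Annotation, not part of the captured log] Two readings of the output above. First, the throughput columns are self-consistent: with 4 KiB I/Os, MiB/s is simply IOPS / 256, e.g. 18606.25 IOPS = 72.68 MiB/s for spdk_nvme_perf and 5950.88 IOPS = 23.25 MiB/s for TLSTESTn1. Second, the failure being logged here is intentional: target/tls.sh 147 attaches with /tmp/tmp.uBUfthLvJ1, a key the target was never given, so the handshake collapses and the JSON-RPC error dumped below is the expected result. A quick check of the arithmetic:

def mibps(iops: float, io_size: int = 4096) -> float:
    # bdevperf/perf report MiB/s as IOPS times the fixed I/O size
    return iops * io_size / (1 << 20)

print(round(mibps(18606.25), 2))  # 72.68 (spdk_nvme_perf, above)
print(round(mibps(5950.88), 2))   # 23.25 (TLSTESTn1, above)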
00:19:59.953 request: 00:19:59.953 { 00:19:59.953 "name": "TLSTEST", 00:19:59.953 "trtype": "tcp", 00:19:59.953 "traddr": "10.0.0.2", 00:19:59.953 "adrfam": "ipv4", 00:19:59.953 "trsvcid": "4420", 00:19:59.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.953 "prchk_reftag": false, 00:19:59.953 "prchk_guard": false, 00:19:59.953 "hdgst": false, 00:19:59.953 "ddgst": false, 00:19:59.953 "psk": "key0", 00:19:59.953 "allow_unrecognized_csi": false, 00:19:59.953 "method": "bdev_nvme_attach_controller", 00:19:59.953 "req_id": 1 00:19:59.953 } 00:19:59.954 Got JSON-RPC error response 00:19:59.954 response: 00:19:59.954 { 00:19:59.954 "code": -5, 00:19:59.954 "message": "Input/output error" 00:19:59.954 } 00:19:59.954 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3542761 00:19:59.954 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3542761 ']' 00:19:59.954 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3542761 00:19:59.954 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.954 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.954 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3542761 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3542761' 00:20:00.214 killing process with pid 3542761 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3542761 00:20:00.214 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.214 00:20:00.214 Latency(us) 00:20:00.214 [2024-12-09T10:34:52.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.214 [2024-12-09T10:34:52.376Z] =================================================================================================================== 00:20:00.214 [2024-12-09T10:34:52.376Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3542761 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gFK73LExL7 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.gFK73LExL7 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gFK73LExL7 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gFK73LExL7 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3542983 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3542983 /var/tmp/bdevperf.sock 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3542983 ']' 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.214 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.214 [2024-12-09 11:34:52.294444] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
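[Annotation, not part of the captured log] The negative cases at target/tls.sh 147, 150 and 153 all follow the same NOT idiom: run run_bdevperf with a deliberately wrong credential (an unregistered key, or here the hostnqn host2 that was never added to the subsystem), require it to fail, and invert the exit status, which is why each case below ends in "return 1" followed by "es=1". A hedged sketch of that inversion, with a hypothetical command list standing in for the traced rpc.py call:

import subprocess

def expect_failure(cmd: list[str]) -> None:
    # Mirrors the NOT() pattern traced above: the wrapped command must
    # fail; the wrapper itself errors if the command unexpectedly succeeds.
    rc = subprocess.run(cmd).returncode
    if rc == 0:
        raise AssertionError(f"expected failure, but {cmd!r} returned 0")

# Hypothetical usage, modelled on the traced attach with the wrong hostnqn:
# expect_failure(["rpc.py", "-s", "/var/tmp/bdevperf.sock",
#                 "bdev_nvme_attach_controller", "-b", "TLSTEST", ...])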
00:20:00.214 [2024-12-09 11:34:52.294501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542983 ] 00:20:00.214 [2024-12-09 11:34:52.352651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.475 [2024-12-09 11:34:52.381294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.475 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.475 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.475 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gFK73LExL7 00:20:00.475 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:00.735 [2024-12-09 11:34:52.759611] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.735 [2024-12-09 11:34:52.767656] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:00.735 [2024-12-09 11:34:52.767675] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:00.735 [2024-12-09 11:34:52.767693] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.735 [2024-12-09 11:34:52.767811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7475b0 (107): Transport endpoint is not connected 00:20:00.735 [2024-12-09 11:34:52.768798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7475b0 (9): Bad file descriptor 00:20:00.735 [2024-12-09 11:34:52.769801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:00.735 [2024-12-09 11:34:52.769808] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:00.735 [2024-12-09 11:34:52.769813] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:00.735 [2024-12-09 11:34:52.769822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
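The paired ERROR lines from tcp.c and posix.c above show why this attach fails: the target looks the pre-shared key up by a TLS PSK identity built from the host and subsystem NQNs, and no key was ever registered for host2, so the handshake collapses and the initiator sees a dead transport endpoint. A sketch of how that identity string appears to be assembled; the field breakdown of the NVMe0R01 prefix (version 0, "R" for retained PSK, two-digit hash id where 01 = SHA-256 and 02 = SHA-384) is an assumption based on the NVMe/TCP TLS spec, not on SPDK source:

    def psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
        # "NVMe" + version + "R" + 2-digit hash id, then the two NQNs,
        # space-separated -- matching the string in the error records above
        return "NVMe0R{:02x} {} {}".format(hash_id, hostnqn, subnqn)

    assert (psk_identity("nqn.2016-06.io.spdk:host2",
                         "nqn.2016-06.io.spdk:cnode1")
            == "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1")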
00:20:00.735 request: 00:20:00.735 { 00:20:00.735 "name": "TLSTEST", 00:20:00.735 "trtype": "tcp", 00:20:00.735 "traddr": "10.0.0.2", 00:20:00.735 "adrfam": "ipv4", 00:20:00.735 "trsvcid": "4420", 00:20:00.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:00.735 "prchk_reftag": false, 00:20:00.735 "prchk_guard": false, 00:20:00.735 "hdgst": false, 00:20:00.735 "ddgst": false, 00:20:00.735 "psk": "key0", 00:20:00.735 "allow_unrecognized_csi": false, 00:20:00.735 "method": "bdev_nvme_attach_controller", 00:20:00.735 "req_id": 1 00:20:00.735 } 00:20:00.735 Got JSON-RPC error response 00:20:00.735 response: 00:20:00.735 { 00:20:00.735 "code": -5, 00:20:00.735 "message": "Input/output error" 00:20:00.735 } 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3542983 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3542983 ']' 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3542983 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3542983 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3542983' 00:20:00.735 killing process with pid 3542983 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3542983 00:20:00.735 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.735 00:20:00.735 Latency(us) 00:20:00.735 [2024-12-09T10:34:52.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.735 [2024-12-09T10:34:52.897Z] =================================================================================================================== 00:20:00.735 [2024-12-09T10:34:52.897Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.735 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3542983 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFK73LExL7 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
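The return 1 / es=1 bookkeeping just above is the harness's NOT wrapper confirming that the attach was supposed to fail: the test only passes because run_bdevperf exited nonzero. A minimal Python analogue of that negative-test pattern, with a placeholder command rather than the real helper:

    import subprocess

    def expect_failure(cmd: list) -> None:
        # mirror of the NOT wrapper: success here means the command failed
        rc = subprocess.run(cmd).returncode
        assert rc != 0, f"{cmd!r} unexpectedly succeeded"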
/tmp/tmp.gFK73LExL7 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFK73LExL7 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gFK73LExL7 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3543114 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3543114 /var/tmp/bdevperf.sock 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3543114 ']' 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.995 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.995 [2024-12-09 11:34:52.999152] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:20:00.995 [2024-12-09 11:34:52.999208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543114 ] 00:20:00.995 [2024-12-09 11:34:53.058358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.995 [2024-12-09 11:34:53.086162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.255 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.255 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.255 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gFK73LExL7 00:20:01.255 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.516 [2024-12-09 11:34:53.508534] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.516 [2024-12-09 11:34:53.519481] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:01.516 [2024-12-09 11:34:53.519503] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:01.516 [2024-12-09 11:34:53.519520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:01.516 [2024-12-09 11:34:53.519794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b975b0 (107): Transport endpoint is not connected 00:20:01.516 [2024-12-09 11:34:53.520790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b975b0 (9): Bad file descriptor 00:20:01.516 [2024-12-09 11:34:53.521792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:01.516 [2024-12-09 11:34:53.521800] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:01.516 [2024-12-09 11:34:53.521807] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:01.516 [2024-12-09 11:34:53.521814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:01.516 request: 00:20:01.516 { 00:20:01.516 "name": "TLSTEST", 00:20:01.516 "trtype": "tcp", 00:20:01.516 "traddr": "10.0.0.2", 00:20:01.516 "adrfam": "ipv4", 00:20:01.516 "trsvcid": "4420", 00:20:01.516 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.516 "prchk_reftag": false, 00:20:01.516 "prchk_guard": false, 00:20:01.516 "hdgst": false, 00:20:01.516 "ddgst": false, 00:20:01.516 "psk": "key0", 00:20:01.516 "allow_unrecognized_csi": false, 00:20:01.516 "method": "bdev_nvme_attach_controller", 00:20:01.516 "req_id": 1 00:20:01.516 } 00:20:01.516 Got JSON-RPC error response 00:20:01.516 response: 00:20:01.516 { 00:20:01.516 "code": -5, 00:20:01.516 "message": "Input/output error" 00:20:01.516 } 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3543114 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3543114 ']' 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3543114 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3543114 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543114' 00:20:01.516 killing process with pid 3543114 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3543114 00:20:01.516 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.516 00:20:01.516 Latency(us) 00:20:01.516 [2024-12-09T10:34:53.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.516 [2024-12-09T10:34:53.678Z] =================================================================================================================== 00:20:01.516 [2024-12-09T10:34:53.678Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:01.516 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3543114 00:20:01.777 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:01.777 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:01.777 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:01.777 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:01.778 
11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3543211 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3543211 /var/tmp/bdevperf.sock 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3543211 ']' 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.778 [2024-12-09 11:34:53.767095] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
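This round registers key0 with an empty path, and in the records that follow keyring_file_check_path (keyring.c:24) rejects it before touching any file at all. A rough analogue of that validation step, with a hypothetical helper name; the real check lives in keyring.c:

    import os

    def check_key_path(path: str) -> None:
        if not os.path.isabs(path):
            raise ValueError("Non-absolute paths are not allowed: " + path)

    # check_key_path("") raises, which is why keyring_file_add_key key0 ''
    # comes back below as code -1 / "Operation not permitted".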
00:20:01.778 [2024-12-09 11:34:53.767150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543211 ] 00:20:01.778 [2024-12-09 11:34:53.826242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.778 [2024-12-09 11:34:53.854450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.778 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:02.038 [2024-12-09 11:34:54.084301] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:02.038 [2024-12-09 11:34:54.084325] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:02.038 request: 00:20:02.038 { 00:20:02.038 "name": "key0", 00:20:02.038 "path": "", 00:20:02.038 "method": "keyring_file_add_key", 00:20:02.038 "req_id": 1 00:20:02.038 } 00:20:02.038 Got JSON-RPC error response 00:20:02.038 response: 00:20:02.038 { 00:20:02.038 "code": -1, 00:20:02.038 "message": "Operation not permitted" 00:20:02.038 } 00:20:02.038 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.299 [2024-12-09 11:34:54.252806] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.299 [2024-12-09 11:34:54.252835] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:02.299 request: 00:20:02.299 { 00:20:02.299 "name": "TLSTEST", 00:20:02.299 "trtype": "tcp", 00:20:02.299 "traddr": "10.0.0.2", 00:20:02.299 "adrfam": "ipv4", 00:20:02.299 "trsvcid": "4420", 00:20:02.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.299 "prchk_reftag": false, 00:20:02.299 "prchk_guard": false, 00:20:02.299 "hdgst": false, 00:20:02.299 "ddgst": false, 00:20:02.299 "psk": "key0", 00:20:02.299 "allow_unrecognized_csi": false, 00:20:02.299 "method": "bdev_nvme_attach_controller", 00:20:02.299 "req_id": 1 00:20:02.299 } 00:20:02.299 Got JSON-RPC error response 00:20:02.299 response: 00:20:02.299 { 00:20:02.299 "code": -126, 00:20:02.299 "message": "Required key not available" 00:20:02.299 } 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3543211 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3543211 ']' 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3543211 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3543211 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543211' 00:20:02.299 killing process with pid 3543211 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3543211 00:20:02.299 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.299 00:20:02.299 Latency(us) 00:20:02.299 [2024-12-09T10:34:54.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.299 [2024-12-09T10:34:54.461Z] =================================================================================================================== 00:20:02.299 [2024-12-09T10:34:54.461Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3543211 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3537716 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3537716 ']' 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3537716 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.299 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3537716 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3537716' 00:20:02.561 killing process with pid 3537716 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3537716 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3537716 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:02.561 11:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.GONscGFP1k 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.GONscGFP1k 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3543474 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3543474 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3543474 ']' 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.561 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.823 [2024-12-09 11:34:54.721666] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:02.823 [2024-12-09 11:34:54.721719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.823 [2024-12-09 11:34:54.811246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.823 [2024-12-09 11:34:54.840645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.823 [2024-12-09 11:34:54.840675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
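The format_interchange_psk / format_key trace above pipes prefix, key, and digest through a python heredoc and captures the printed key_long. Reconstructing that computation from the output: the configured key bytes get a little-endian CRC32 appended, the result is base64-encoded, and the whole thing is wrapped as prefix:digest:b64:. This is a reconstruction from the trace, not the verbatim heredoc:

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int,
                               prefix: str = "NVMeTLSkey-1") -> str:
        # little-endian CRC32 of the key bytes, appended before encoding
        crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(key.encode() + crc).decode()
        return "{}:{:02x}:{}:".format(prefix, digest, b64)

    # should reproduce the key_long recorded above, ending in ...wWXNJw==:
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))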
00:20:02.823 [2024-12-09 11:34:54.840680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.823 [2024-12-09 11:34:54.840685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.823 [2024-12-09 11:34:54.840692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.823 [2024-12-09 11:34:54.841155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.GONscGFP1k 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GONscGFP1k 00:20:03.396 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.657 [2024-12-09 11:34:55.690494] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.657 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.917 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.917 [2024-12-09 11:34:56.027320] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.917 [2024-12-09 11:34:56.027523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.917 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.180 malloc0 00:20:04.180 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.441 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:04.441 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GONscGFP1k 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GONscGFP1k 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3543841 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3543841 /var/tmp/bdevperf.sock 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3543841 ']' 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.702 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.702 [2024-12-09 11:34:56.765963] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
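For reference, the whole setup_nvmf_tgt sequence above (tls.sh lines 50-59) collapses into a fixed series of rpc.py calls against the target's socket. The replay below copies every argv verbatim from the trace records; only the wrapping Python script is new:

    import subprocess

    RPC = ["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"]
    SUBNQN = "nqn.2016-06.io.spdk:cnode1"

    for argv in [
        ["nvmf_create_transport", "-t", "tcp", "-o"],
        ["nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001",
         "-m", "10"],
        ["nvmf_subsystem_add_listener", SUBNQN, "-t", "tcp",
         "-a", "10.0.0.2", "-s", "4420", "-k"],   # -k: TLS-enabled listener
        ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        ["nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1"],
        ["keyring_file_add_key", "key0", "/tmp/tmp.GONscGFP1k"],
        ["nvmf_subsystem_add_host", SUBNQN, "nqn.2016-06.io.spdk:host1",
         "--psk", "key0"],
    ]:
        subprocess.run(RPC + argv, check=True)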
00:20:04.702 [2024-12-09 11:34:56.766026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543841 ] 00:20:04.702 [2024-12-09 11:34:56.825145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.702 [2024-12-09 11:34:56.854076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.963 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.963 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.963 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:04.963 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.223 [2024-12-09 11:34:57.248556] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.223 TLSTESTn1 00:20:05.223 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.482 Running I/O for 10 seconds... 00:20:07.364 4256.00 IOPS, 16.62 MiB/s [2024-12-09T10:35:00.468Z] 4598.50 IOPS, 17.96 MiB/s [2024-12-09T10:35:01.852Z] 4758.33 IOPS, 18.59 MiB/s [2024-12-09T10:35:02.791Z] 4807.00 IOPS, 18.78 MiB/s [2024-12-09T10:35:03.733Z] 4642.60 IOPS, 18.14 MiB/s [2024-12-09T10:35:04.674Z] 4712.50 IOPS, 18.41 MiB/s [2024-12-09T10:35:05.618Z] 4732.43 IOPS, 18.49 MiB/s [2024-12-09T10:35:06.560Z] 4754.50 IOPS, 18.57 MiB/s [2024-12-09T10:35:07.504Z] 4772.89 IOPS, 18.64 MiB/s [2024-12-09T10:35:07.504Z] 4729.00 IOPS, 18.47 MiB/s 00:20:15.342 Latency(us) 00:20:15.342 [2024-12-09T10:35:07.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.342 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.342 Verification LBA range: start 0x0 length 0x2000 00:20:15.342 TLSTESTn1 : 10.02 4730.88 18.48 0.00 0.00 27011.49 5188.27 81701.55 00:20:15.342 [2024-12-09T10:35:07.504Z] =================================================================================================================== 00:20:15.342 [2024-12-09T10:35:07.504Z] Total : 4730.88 18.48 0.00 0.00 27011.49 5188.27 81701.55 00:20:15.342 { 00:20:15.342 "results": [ 00:20:15.342 { 00:20:15.342 "job": "TLSTESTn1", 00:20:15.342 "core_mask": "0x4", 00:20:15.342 "workload": "verify", 00:20:15.342 "status": "finished", 00:20:15.342 "verify_range": { 00:20:15.342 "start": 0, 00:20:15.342 "length": 8192 00:20:15.342 }, 00:20:15.342 "queue_depth": 128, 00:20:15.342 "io_size": 4096, 00:20:15.342 "runtime": 10.022869, 00:20:15.342 "iops": 4730.880948359198, 00:20:15.342 "mibps": 18.480003704528116, 00:20:15.342 "io_failed": 0, 00:20:15.342 "io_timeout": 0, 00:20:15.342 "avg_latency_us": 27011.49039374064, 00:20:15.342 "min_latency_us": 5188.266666666666, 00:20:15.342 "max_latency_us": 81701.54666666666 00:20:15.342 } 00:20:15.342 ], 00:20:15.342 
"core_count": 1 00:20:15.342 } 00:20:15.342 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.342 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3543841 00:20:15.342 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3543841 ']' 00:20:15.342 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3543841 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3543841 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543841' 00:20:15.603 killing process with pid 3543841 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3543841 00:20:15.603 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.603 00:20:15.603 Latency(us) 00:20:15.603 [2024-12-09T10:35:07.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.603 [2024-12-09T10:35:07.765Z] =================================================================================================================== 00:20:15.603 [2024-12-09T10:35:07.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3543841 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.GONscGFP1k 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GONscGFP1k 00:20:15.603 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GONscGFP1k 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GONscGFP1k 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GONscGFP1k 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3546025 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3546025 /var/tmp/bdevperf.sock 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3546025 ']' 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.604 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.604 [2024-12-09 11:35:07.729168] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
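Here the key file was deliberately re-chmodded to 0666 (tls.sh@171 above), and in the records that follow keyring_file_check_path prints the full st_mode, 0100666, and refuses it: the keyring requires that group and others have no access bits on a PSK file. A rough analogue of that check, with a hypothetical helper name; the real one is in keyring.c:

    import os
    import stat

    def check_key_perms(path: str) -> None:
        mode = os.stat(path).st_mode
        if stat.S_IMODE(mode) & 0o077:   # any group/other access is fatal
            raise PermissionError(
                "Invalid permissions for key file %r: 0%o" % (path, mode))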
00:20:15.604 [2024-12-09 11:35:07.729226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546025 ] 00:20:15.864 [2024-12-09 11:35:07.787664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.864 [2024-12-09 11:35:07.816297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.864 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.864 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:15.864 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:16.123 [2024-12-09 11:35:08.042058] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GONscGFP1k': 0100666 00:20:16.123 [2024-12-09 11:35:08.042078] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:16.123 request: 00:20:16.123 { 00:20:16.123 "name": "key0", 00:20:16.123 "path": "/tmp/tmp.GONscGFP1k", 00:20:16.123 "method": "keyring_file_add_key", 00:20:16.123 "req_id": 1 00:20:16.123 } 00:20:16.123 Got JSON-RPC error response 00:20:16.123 response: 00:20:16.123 { 00:20:16.123 "code": -1, 00:20:16.123 "message": "Operation not permitted" 00:20:16.123 } 00:20:16.123 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.123 [2024-12-09 11:35:08.210549] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.123 [2024-12-09 11:35:08.210579] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:16.123 request: 00:20:16.123 { 00:20:16.123 "name": "TLSTEST", 00:20:16.123 "trtype": "tcp", 00:20:16.123 "traddr": "10.0.0.2", 00:20:16.123 "adrfam": "ipv4", 00:20:16.123 "trsvcid": "4420", 00:20:16.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.123 "prchk_reftag": false, 00:20:16.123 "prchk_guard": false, 00:20:16.123 "hdgst": false, 00:20:16.123 "ddgst": false, 00:20:16.123 "psk": "key0", 00:20:16.123 "allow_unrecognized_csi": false, 00:20:16.123 "method": "bdev_nvme_attach_controller", 00:20:16.123 "req_id": 1 00:20:16.123 } 00:20:16.123 Got JSON-RPC error response 00:20:16.123 response: 00:20:16.123 { 00:20:16.123 "code": -126, 00:20:16.123 "message": "Required key not available" 00:20:16.123 } 00:20:16.123 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3546025 00:20:16.123 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3546025 ']' 00:20:16.123 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3546025 00:20:16.123 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.123 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.123 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546025 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546025' 00:20:16.383 killing process with pid 3546025 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3546025 00:20:16.383 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.383 00:20:16.383 Latency(us) 00:20:16.383 [2024-12-09T10:35:08.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.383 [2024-12-09T10:35:08.545Z] =================================================================================================================== 00:20:16.383 [2024-12-09T10:35:08.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3546025 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3543474 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3543474 ']' 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3543474 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3543474 00:20:16.383 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:16.384 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:16.384 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3543474' 00:20:16.384 killing process with pid 3543474 00:20:16.384 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3543474 00:20:16.384 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3543474 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3546204 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3546204 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3546204 ']' 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.644 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.644 [2024-12-09 11:35:08.631993] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:16.644 [2024-12-09 11:35:08.632079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.644 [2024-12-09 11:35:08.725322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.644 [2024-12-09 11:35:08.753522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.644 [2024-12-09 11:35:08.753550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.644 [2024-12-09 11:35:08.753556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.644 [2024-12-09 11:35:08.753561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.644 [2024-12-09 11:35:08.753565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
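waitforlisten, with its local max_retries=100 visible in the trace, simply polls until the freshly started nvmf_tgt accepts connections on /var/tmp/spdk.sock. A minimal equivalent of that loop; the sleep interval is a guess, and the real helper also watches the target pid, which this sketch omits:

    import socket
    import time

    def waitforlisten(sock_path: str, retries: int = 100) -> None:
        for _ in range(retries):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return                      # target is up and listening
            except OSError:
                time.sleep(0.1)                 # not ready yet, retry
        raise TimeoutError(f"{sock_path} never came up")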
00:20:16.644 [2024-12-09 11:35:08.754038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.GONscGFP1k 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GONscGFP1k 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.GONscGFP1k 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GONscGFP1k 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.584 [2024-12-09 11:35:09.622972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.584 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.847 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.847 [2024-12-09 11:35:09.955783] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.847 [2024-12-09 11:35:09.955991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.847 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.108 malloc0 00:20:18.108 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.369 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:18.369 [2024-12-09 
11:35:10.450807] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GONscGFP1k': 0100666 00:20:18.369 [2024-12-09 11:35:10.450828] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:18.369 request: 00:20:18.369 { 00:20:18.369 "name": "key0", 00:20:18.369 "path": "/tmp/tmp.GONscGFP1k", 00:20:18.369 "method": "keyring_file_add_key", 00:20:18.369 "req_id": 1 00:20:18.369 } 00:20:18.369 Got JSON-RPC error response 00:20:18.369 response: 00:20:18.369 { 00:20:18.369 "code": -1, 00:20:18.369 "message": "Operation not permitted" 00:20:18.369 } 00:20:18.369 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:18.629 [2024-12-09 11:35:10.615229] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:18.629 [2024-12-09 11:35:10.615257] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:18.629 request: 00:20:18.629 { 00:20:18.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.629 "host": "nqn.2016-06.io.spdk:host1", 00:20:18.629 "psk": "key0", 00:20:18.629 "method": "nvmf_subsystem_add_host", 00:20:18.629 "req_id": 1 00:20:18.629 } 00:20:18.629 Got JSON-RPC error response 00:20:18.629 response: 00:20:18.629 { 00:20:18.629 "code": -32603, 00:20:18.629 "message": "Internal error" 00:20:18.629 } 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3546204 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3546204 ']' 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3546204 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546204 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546204' 00:20:18.629 killing process with pid 3546204 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3546204 00:20:18.629 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3546204 00:20:18.888 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.GONscGFP1k 00:20:18.888 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:18.889 11:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3546603 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3546603 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3546603 ']' 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.889 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.889 [2024-12-09 11:35:10.866599] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:18.889 [2024-12-09 11:35:10.866656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.889 [2024-12-09 11:35:10.956631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.889 [2024-12-09 11:35:10.985425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.889 [2024-12-09 11:35:10.985455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.889 [2024-12-09 11:35:10.985461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.889 [2024-12-09 11:35:10.985465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.889 [2024-12-09 11:35:10.985469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
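The keyring_file_add_key failure above (code -1, "Operation not permitted") is the deliberate negative case at target/tls.sh@178: keyring_file_check_path rejects a PSK file whose mode is 0100666, i.e. readable and writable by group and others. The chmod at target/tls.sh@182 is the whole fix; a minimal sketch of it, using the rpc.py path and temp key file from this log:

    # SPDK's file-based keyring requires the PSK interchange file to be
    # private to its owner; mode 0666 produced the error logged above.
    KEY=/tmp/tmp.GONscGFP1k
    chmod 0600 "$KEY"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        keyring_file_add_key key0 "$KEY"   # now succeeds, as in the rerun at tls.sh@186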
00:20:18.889 [2024-12-09 11:35:10.985955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.GONscGFP1k 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GONscGFP1k 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:19.828 [2024-12-09 11:35:11.846944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.828 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:20.088 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:20.088 [2024-12-09 11:35:12.163715] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.088 [2024-12-09 11:35:12.163902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.088 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:20.347 malloc0 00:20:20.347 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:20.348 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:20.607 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3547110 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3547110 /var/tmp/bdevperf.sock 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3547110 ']' 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.867 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.867 [2024-12-09 11:35:12.836341] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:20.867 [2024-12-09 11:35:12.836395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547110 ] 00:20:20.867 [2024-12-09 11:35:12.895527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.867 [2024-12-09 11:35:12.924485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.867 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.867 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:20.867 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:21.127 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:21.386 [2024-12-09 11:35:13.334916] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.386 TLSTESTn1 00:20:21.386 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:21.647 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:21.647 "subsystems": [ 00:20:21.647 { 00:20:21.647 "subsystem": "keyring", 00:20:21.647 "config": [ 00:20:21.647 { 00:20:21.647 "method": "keyring_file_add_key", 00:20:21.647 "params": { 00:20:21.647 "name": "key0", 00:20:21.647 "path": "/tmp/tmp.GONscGFP1k" 00:20:21.647 } 00:20:21.647 } 00:20:21.647 ] 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "subsystem": "iobuf", 00:20:21.647 "config": [ 00:20:21.647 { 00:20:21.647 "method": "iobuf_set_options", 00:20:21.647 "params": { 00:20:21.647 "small_pool_count": 8192, 00:20:21.647 "large_pool_count": 1024, 00:20:21.647 "small_bufsize": 8192, 00:20:21.647 "large_bufsize": 135168, 00:20:21.647 "enable_numa": false 00:20:21.647 } 00:20:21.647 } 00:20:21.647 ] 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "subsystem": "sock", 00:20:21.647 "config": [ 00:20:21.647 { 00:20:21.647 "method": "sock_set_default_impl", 00:20:21.647 "params": { 00:20:21.647 "impl_name": "posix" 
00:20:21.647 } 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "method": "sock_impl_set_options", 00:20:21.647 "params": { 00:20:21.647 "impl_name": "ssl", 00:20:21.647 "recv_buf_size": 4096, 00:20:21.647 "send_buf_size": 4096, 00:20:21.647 "enable_recv_pipe": true, 00:20:21.647 "enable_quickack": false, 00:20:21.647 "enable_placement_id": 0, 00:20:21.647 "enable_zerocopy_send_server": true, 00:20:21.647 "enable_zerocopy_send_client": false, 00:20:21.647 "zerocopy_threshold": 0, 00:20:21.647 "tls_version": 0, 00:20:21.647 "enable_ktls": false 00:20:21.647 } 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "method": "sock_impl_set_options", 00:20:21.647 "params": { 00:20:21.647 "impl_name": "posix", 00:20:21.647 "recv_buf_size": 2097152, 00:20:21.647 "send_buf_size": 2097152, 00:20:21.647 "enable_recv_pipe": true, 00:20:21.647 "enable_quickack": false, 00:20:21.647 "enable_placement_id": 0, 00:20:21.647 "enable_zerocopy_send_server": true, 00:20:21.647 "enable_zerocopy_send_client": false, 00:20:21.647 "zerocopy_threshold": 0, 00:20:21.647 "tls_version": 0, 00:20:21.647 "enable_ktls": false 00:20:21.647 } 00:20:21.647 } 00:20:21.647 ] 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "subsystem": "vmd", 00:20:21.647 "config": [] 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "subsystem": "accel", 00:20:21.647 "config": [ 00:20:21.647 { 00:20:21.647 "method": "accel_set_options", 00:20:21.647 "params": { 00:20:21.647 "small_cache_size": 128, 00:20:21.647 "large_cache_size": 16, 00:20:21.647 "task_count": 2048, 00:20:21.647 "sequence_count": 2048, 00:20:21.647 "buf_count": 2048 00:20:21.647 } 00:20:21.647 } 00:20:21.647 ] 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "subsystem": "bdev", 00:20:21.647 "config": [ 00:20:21.647 { 00:20:21.647 "method": "bdev_set_options", 00:20:21.647 "params": { 00:20:21.647 "bdev_io_pool_size": 65535, 00:20:21.647 "bdev_io_cache_size": 256, 00:20:21.647 "bdev_auto_examine": true, 00:20:21.647 "iobuf_small_cache_size": 128, 00:20:21.647 "iobuf_large_cache_size": 16 00:20:21.647 } 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "method": "bdev_raid_set_options", 00:20:21.647 "params": { 00:20:21.647 "process_window_size_kb": 1024, 00:20:21.647 "process_max_bandwidth_mb_sec": 0 00:20:21.647 } 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "method": "bdev_iscsi_set_options", 00:20:21.647 "params": { 00:20:21.647 "timeout_sec": 30 00:20:21.647 } 00:20:21.647 }, 00:20:21.647 { 00:20:21.647 "method": "bdev_nvme_set_options", 00:20:21.647 "params": { 00:20:21.647 "action_on_timeout": "none", 00:20:21.647 "timeout_us": 0, 00:20:21.647 "timeout_admin_us": 0, 00:20:21.647 "keep_alive_timeout_ms": 10000, 00:20:21.647 "arbitration_burst": 0, 00:20:21.647 "low_priority_weight": 0, 00:20:21.647 "medium_priority_weight": 0, 00:20:21.647 "high_priority_weight": 0, 00:20:21.647 "nvme_adminq_poll_period_us": 10000, 00:20:21.647 "nvme_ioq_poll_period_us": 0, 00:20:21.647 "io_queue_requests": 0, 00:20:21.647 "delay_cmd_submit": true, 00:20:21.647 "transport_retry_count": 4, 00:20:21.647 "bdev_retry_count": 3, 00:20:21.647 "transport_ack_timeout": 0, 00:20:21.647 "ctrlr_loss_timeout_sec": 0, 00:20:21.647 "reconnect_delay_sec": 0, 00:20:21.647 "fast_io_fail_timeout_sec": 0, 00:20:21.647 "disable_auto_failback": false, 00:20:21.647 "generate_uuids": false, 00:20:21.647 "transport_tos": 0, 00:20:21.647 "nvme_error_stat": false, 00:20:21.647 "rdma_srq_size": 0, 00:20:21.647 "io_path_stat": false, 00:20:21.647 "allow_accel_sequence": false, 00:20:21.647 "rdma_max_cq_size": 0, 00:20:21.648 
"rdma_cm_event_timeout_ms": 0, 00:20:21.648 "dhchap_digests": [ 00:20:21.648 "sha256", 00:20:21.648 "sha384", 00:20:21.648 "sha512" 00:20:21.648 ], 00:20:21.648 "dhchap_dhgroups": [ 00:20:21.648 "null", 00:20:21.648 "ffdhe2048", 00:20:21.648 "ffdhe3072", 00:20:21.648 "ffdhe4096", 00:20:21.648 "ffdhe6144", 00:20:21.648 "ffdhe8192" 00:20:21.648 ] 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "bdev_nvme_set_hotplug", 00:20:21.648 "params": { 00:20:21.648 "period_us": 100000, 00:20:21.648 "enable": false 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "bdev_malloc_create", 00:20:21.648 "params": { 00:20:21.648 "name": "malloc0", 00:20:21.648 "num_blocks": 8192, 00:20:21.648 "block_size": 4096, 00:20:21.648 "physical_block_size": 4096, 00:20:21.648 "uuid": "222a71cf-6524-422c-9f29-cf8d7a15b68f", 00:20:21.648 "optimal_io_boundary": 0, 00:20:21.648 "md_size": 0, 00:20:21.648 "dif_type": 0, 00:20:21.648 "dif_is_head_of_md": false, 00:20:21.648 "dif_pi_format": 0 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "bdev_wait_for_examine" 00:20:21.648 } 00:20:21.648 ] 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "subsystem": "nbd", 00:20:21.648 "config": [] 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "subsystem": "scheduler", 00:20:21.648 "config": [ 00:20:21.648 { 00:20:21.648 "method": "framework_set_scheduler", 00:20:21.648 "params": { 00:20:21.648 "name": "static" 00:20:21.648 } 00:20:21.648 } 00:20:21.648 ] 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "subsystem": "nvmf", 00:20:21.648 "config": [ 00:20:21.648 { 00:20:21.648 "method": "nvmf_set_config", 00:20:21.648 "params": { 00:20:21.648 "discovery_filter": "match_any", 00:20:21.648 "admin_cmd_passthru": { 00:20:21.648 "identify_ctrlr": false 00:20:21.648 }, 00:20:21.648 "dhchap_digests": [ 00:20:21.648 "sha256", 00:20:21.648 "sha384", 00:20:21.648 "sha512" 00:20:21.648 ], 00:20:21.648 "dhchap_dhgroups": [ 00:20:21.648 "null", 00:20:21.648 "ffdhe2048", 00:20:21.648 "ffdhe3072", 00:20:21.648 "ffdhe4096", 00:20:21.648 "ffdhe6144", 00:20:21.648 "ffdhe8192" 00:20:21.648 ] 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "nvmf_set_max_subsystems", 00:20:21.648 "params": { 00:20:21.648 "max_subsystems": 1024 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "nvmf_set_crdt", 00:20:21.648 "params": { 00:20:21.648 "crdt1": 0, 00:20:21.648 "crdt2": 0, 00:20:21.648 "crdt3": 0 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "nvmf_create_transport", 00:20:21.648 "params": { 00:20:21.648 "trtype": "TCP", 00:20:21.648 "max_queue_depth": 128, 00:20:21.648 "max_io_qpairs_per_ctrlr": 127, 00:20:21.648 "in_capsule_data_size": 4096, 00:20:21.648 "max_io_size": 131072, 00:20:21.648 "io_unit_size": 131072, 00:20:21.648 "max_aq_depth": 128, 00:20:21.648 "num_shared_buffers": 511, 00:20:21.648 "buf_cache_size": 4294967295, 00:20:21.648 "dif_insert_or_strip": false, 00:20:21.648 "zcopy": false, 00:20:21.648 "c2h_success": false, 00:20:21.648 "sock_priority": 0, 00:20:21.648 "abort_timeout_sec": 1, 00:20:21.648 "ack_timeout": 0, 00:20:21.648 "data_wr_pool_size": 0 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "nvmf_create_subsystem", 00:20:21.648 "params": { 00:20:21.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.648 "allow_any_host": false, 00:20:21.648 "serial_number": "SPDK00000000000001", 00:20:21.648 "model_number": "SPDK bdev Controller", 00:20:21.648 "max_namespaces": 10, 00:20:21.648 "min_cntlid": 1, 00:20:21.648 
"max_cntlid": 65519, 00:20:21.648 "ana_reporting": false 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "nvmf_subsystem_add_host", 00:20:21.648 "params": { 00:20:21.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.648 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.648 "psk": "key0" 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "nvmf_subsystem_add_ns", 00:20:21.648 "params": { 00:20:21.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.648 "namespace": { 00:20:21.648 "nsid": 1, 00:20:21.648 "bdev_name": "malloc0", 00:20:21.648 "nguid": "222A71CF6524422C9F29CF8D7A15B68F", 00:20:21.648 "uuid": "222a71cf-6524-422c-9f29-cf8d7a15b68f", 00:20:21.648 "no_auto_visible": false 00:20:21.648 } 00:20:21.648 } 00:20:21.648 }, 00:20:21.648 { 00:20:21.648 "method": "nvmf_subsystem_add_listener", 00:20:21.648 "params": { 00:20:21.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.648 "listen_address": { 00:20:21.648 "trtype": "TCP", 00:20:21.648 "adrfam": "IPv4", 00:20:21.648 "traddr": "10.0.0.2", 00:20:21.648 "trsvcid": "4420" 00:20:21.648 }, 00:20:21.648 "secure_channel": true 00:20:21.648 } 00:20:21.648 } 00:20:21.648 ] 00:20:21.648 } 00:20:21.648 ] 00:20:21.648 }' 00:20:21.648 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:21.909 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:21.909 "subsystems": [ 00:20:21.909 { 00:20:21.909 "subsystem": "keyring", 00:20:21.909 "config": [ 00:20:21.909 { 00:20:21.909 "method": "keyring_file_add_key", 00:20:21.909 "params": { 00:20:21.909 "name": "key0", 00:20:21.909 "path": "/tmp/tmp.GONscGFP1k" 00:20:21.909 } 00:20:21.909 } 00:20:21.909 ] 00:20:21.909 }, 00:20:21.909 { 00:20:21.909 "subsystem": "iobuf", 00:20:21.909 "config": [ 00:20:21.909 { 00:20:21.909 "method": "iobuf_set_options", 00:20:21.909 "params": { 00:20:21.909 "small_pool_count": 8192, 00:20:21.909 "large_pool_count": 1024, 00:20:21.909 "small_bufsize": 8192, 00:20:21.909 "large_bufsize": 135168, 00:20:21.909 "enable_numa": false 00:20:21.909 } 00:20:21.909 } 00:20:21.909 ] 00:20:21.909 }, 00:20:21.909 { 00:20:21.909 "subsystem": "sock", 00:20:21.909 "config": [ 00:20:21.909 { 00:20:21.909 "method": "sock_set_default_impl", 00:20:21.909 "params": { 00:20:21.909 "impl_name": "posix" 00:20:21.909 } 00:20:21.909 }, 00:20:21.909 { 00:20:21.909 "method": "sock_impl_set_options", 00:20:21.909 "params": { 00:20:21.909 "impl_name": "ssl", 00:20:21.909 "recv_buf_size": 4096, 00:20:21.909 "send_buf_size": 4096, 00:20:21.909 "enable_recv_pipe": true, 00:20:21.909 "enable_quickack": false, 00:20:21.909 "enable_placement_id": 0, 00:20:21.909 "enable_zerocopy_send_server": true, 00:20:21.909 "enable_zerocopy_send_client": false, 00:20:21.909 "zerocopy_threshold": 0, 00:20:21.909 "tls_version": 0, 00:20:21.909 "enable_ktls": false 00:20:21.909 } 00:20:21.909 }, 00:20:21.909 { 00:20:21.909 "method": "sock_impl_set_options", 00:20:21.909 "params": { 00:20:21.909 "impl_name": "posix", 00:20:21.909 "recv_buf_size": 2097152, 00:20:21.909 "send_buf_size": 2097152, 00:20:21.909 "enable_recv_pipe": true, 00:20:21.909 "enable_quickack": false, 00:20:21.909 "enable_placement_id": 0, 00:20:21.909 "enable_zerocopy_send_server": true, 00:20:21.909 "enable_zerocopy_send_client": false, 00:20:21.909 "zerocopy_threshold": 0, 00:20:21.909 "tls_version": 0, 00:20:21.909 "enable_ktls": false 00:20:21.909 } 00:20:21.909 
} 00:20:21.909 ] 00:20:21.909 }, 00:20:21.909 { 00:20:21.909 "subsystem": "vmd", 00:20:21.909 "config": [] 00:20:21.909 }, 00:20:21.909 { 00:20:21.909 "subsystem": "accel", 00:20:21.909 "config": [ 00:20:21.909 { 00:20:21.909 "method": "accel_set_options", 00:20:21.909 "params": { 00:20:21.909 "small_cache_size": 128, 00:20:21.909 "large_cache_size": 16, 00:20:21.909 "task_count": 2048, 00:20:21.909 "sequence_count": 2048, 00:20:21.909 "buf_count": 2048 00:20:21.909 } 00:20:21.909 } 00:20:21.909 ] 00:20:21.909 }, 00:20:21.909 { 00:20:21.909 "subsystem": "bdev", 00:20:21.909 "config": [ 00:20:21.909 { 00:20:21.909 "method": "bdev_set_options", 00:20:21.909 "params": { 00:20:21.909 "bdev_io_pool_size": 65535, 00:20:21.909 "bdev_io_cache_size": 256, 00:20:21.909 "bdev_auto_examine": true, 00:20:21.910 "iobuf_small_cache_size": 128, 00:20:21.910 "iobuf_large_cache_size": 16 00:20:21.910 } 00:20:21.910 }, 00:20:21.910 { 00:20:21.910 "method": "bdev_raid_set_options", 00:20:21.910 "params": { 00:20:21.910 "process_window_size_kb": 1024, 00:20:21.910 "process_max_bandwidth_mb_sec": 0 00:20:21.910 } 00:20:21.910 }, 00:20:21.910 { 00:20:21.910 "method": "bdev_iscsi_set_options", 00:20:21.910 "params": { 00:20:21.910 "timeout_sec": 30 00:20:21.910 } 00:20:21.910 }, 00:20:21.910 { 00:20:21.910 "method": "bdev_nvme_set_options", 00:20:21.910 "params": { 00:20:21.910 "action_on_timeout": "none", 00:20:21.910 "timeout_us": 0, 00:20:21.910 "timeout_admin_us": 0, 00:20:21.910 "keep_alive_timeout_ms": 10000, 00:20:21.910 "arbitration_burst": 0, 00:20:21.910 "low_priority_weight": 0, 00:20:21.910 "medium_priority_weight": 0, 00:20:21.910 "high_priority_weight": 0, 00:20:21.910 "nvme_adminq_poll_period_us": 10000, 00:20:21.910 "nvme_ioq_poll_period_us": 0, 00:20:21.910 "io_queue_requests": 512, 00:20:21.910 "delay_cmd_submit": true, 00:20:21.910 "transport_retry_count": 4, 00:20:21.910 "bdev_retry_count": 3, 00:20:21.910 "transport_ack_timeout": 0, 00:20:21.910 "ctrlr_loss_timeout_sec": 0, 00:20:21.910 "reconnect_delay_sec": 0, 00:20:21.910 "fast_io_fail_timeout_sec": 0, 00:20:21.910 "disable_auto_failback": false, 00:20:21.910 "generate_uuids": false, 00:20:21.910 "transport_tos": 0, 00:20:21.910 "nvme_error_stat": false, 00:20:21.910 "rdma_srq_size": 0, 00:20:21.910 "io_path_stat": false, 00:20:21.910 "allow_accel_sequence": false, 00:20:21.910 "rdma_max_cq_size": 0, 00:20:21.910 "rdma_cm_event_timeout_ms": 0, 00:20:21.910 "dhchap_digests": [ 00:20:21.910 "sha256", 00:20:21.910 "sha384", 00:20:21.910 "sha512" 00:20:21.910 ], 00:20:21.910 "dhchap_dhgroups": [ 00:20:21.910 "null", 00:20:21.910 "ffdhe2048", 00:20:21.910 "ffdhe3072", 00:20:21.910 "ffdhe4096", 00:20:21.910 "ffdhe6144", 00:20:21.910 "ffdhe8192" 00:20:21.910 ] 00:20:21.910 } 00:20:21.910 }, 00:20:21.910 { 00:20:21.910 "method": "bdev_nvme_attach_controller", 00:20:21.910 "params": { 00:20:21.910 "name": "TLSTEST", 00:20:21.910 "trtype": "TCP", 00:20:21.910 "adrfam": "IPv4", 00:20:21.910 "traddr": "10.0.0.2", 00:20:21.910 "trsvcid": "4420", 00:20:21.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.910 "prchk_reftag": false, 00:20:21.910 "prchk_guard": false, 00:20:21.910 "ctrlr_loss_timeout_sec": 0, 00:20:21.910 "reconnect_delay_sec": 0, 00:20:21.910 "fast_io_fail_timeout_sec": 0, 00:20:21.910 "psk": "key0", 00:20:21.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.910 "hdgst": false, 00:20:21.910 "ddgst": false, 00:20:21.910 "multipath": "multipath" 00:20:21.910 } 00:20:21.910 }, 00:20:21.910 { 00:20:21.910 "method": 
"bdev_nvme_set_hotplug", 00:20:21.910 "params": { 00:20:21.910 "period_us": 100000, 00:20:21.910 "enable": false 00:20:21.910 } 00:20:21.910 }, 00:20:21.910 { 00:20:21.910 "method": "bdev_wait_for_examine" 00:20:21.910 } 00:20:21.910 ] 00:20:21.910 }, 00:20:21.910 { 00:20:21.910 "subsystem": "nbd", 00:20:21.910 "config": [] 00:20:21.910 } 00:20:21.910 ] 00:20:21.910 }' 00:20:21.910 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3547110 00:20:21.910 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3547110 ']' 00:20:21.910 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3547110 00:20:21.910 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:21.910 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.910 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3547110 00:20:21.910 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:21.910 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:21.910 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3547110' 00:20:21.910 killing process with pid 3547110 00:20:21.910 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3547110 00:20:21.910 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.910 00:20:21.910 Latency(us) 00:20:21.910 [2024-12-09T10:35:14.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.910 [2024-12-09T10:35:14.072Z] =================================================================================================================== 00:20:21.910 [2024-12-09T10:35:14.072Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.910 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3547110 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3546603 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3546603 ']' 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3546603 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546603 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546603' 00:20:22.170 killing process with pid 3546603 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3546603 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3546603 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:22.170 "subsystems": [ 00:20:22.170 { 00:20:22.170 "subsystem": "keyring", 00:20:22.170 "config": [ 00:20:22.170 { 00:20:22.170 "method": "keyring_file_add_key", 00:20:22.170 "params": { 00:20:22.170 "name": "key0", 00:20:22.170 "path": "/tmp/tmp.GONscGFP1k" 00:20:22.170 } 00:20:22.170 } 00:20:22.170 ] 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "subsystem": "iobuf", 00:20:22.170 "config": [ 00:20:22.170 { 00:20:22.170 "method": "iobuf_set_options", 00:20:22.170 "params": { 00:20:22.170 "small_pool_count": 8192, 00:20:22.170 "large_pool_count": 1024, 00:20:22.170 "small_bufsize": 8192, 00:20:22.170 "large_bufsize": 135168, 00:20:22.170 "enable_numa": false 00:20:22.170 } 00:20:22.170 } 00:20:22.170 ] 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "subsystem": "sock", 00:20:22.170 "config": [ 00:20:22.170 { 00:20:22.170 "method": "sock_set_default_impl", 00:20:22.170 "params": { 00:20:22.170 "impl_name": "posix" 00:20:22.170 } 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "method": "sock_impl_set_options", 00:20:22.170 "params": { 00:20:22.170 "impl_name": "ssl", 00:20:22.170 "recv_buf_size": 4096, 00:20:22.170 "send_buf_size": 4096, 00:20:22.170 "enable_recv_pipe": true, 00:20:22.170 "enable_quickack": false, 00:20:22.170 "enable_placement_id": 0, 00:20:22.170 "enable_zerocopy_send_server": true, 00:20:22.170 "enable_zerocopy_send_client": false, 00:20:22.170 "zerocopy_threshold": 0, 00:20:22.170 "tls_version": 0, 00:20:22.170 "enable_ktls": false 00:20:22.170 } 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "method": "sock_impl_set_options", 00:20:22.170 "params": { 00:20:22.170 "impl_name": "posix", 00:20:22.170 "recv_buf_size": 2097152, 00:20:22.170 "send_buf_size": 2097152, 00:20:22.170 "enable_recv_pipe": true, 00:20:22.170 "enable_quickack": false, 00:20:22.170 "enable_placement_id": 0, 00:20:22.170 "enable_zerocopy_send_server": true, 00:20:22.170 "enable_zerocopy_send_client": false, 00:20:22.170 "zerocopy_threshold": 0, 00:20:22.170 "tls_version": 0, 00:20:22.170 "enable_ktls": false 00:20:22.170 } 00:20:22.170 } 00:20:22.170 ] 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "subsystem": "vmd", 00:20:22.170 "config": [] 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "subsystem": "accel", 00:20:22.170 "config": [ 00:20:22.170 { 00:20:22.170 "method": "accel_set_options", 00:20:22.170 "params": { 00:20:22.170 "small_cache_size": 128, 00:20:22.170 "large_cache_size": 16, 00:20:22.170 "task_count": 2048, 00:20:22.170 "sequence_count": 2048, 00:20:22.170 "buf_count": 2048 00:20:22.170 } 00:20:22.170 } 00:20:22.170 ] 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "subsystem": "bdev", 00:20:22.170 "config": [ 00:20:22.170 { 00:20:22.170 "method": "bdev_set_options", 00:20:22.170 "params": { 00:20:22.170 "bdev_io_pool_size": 65535, 00:20:22.170 "bdev_io_cache_size": 256, 00:20:22.170 "bdev_auto_examine": true, 00:20:22.170 "iobuf_small_cache_size": 128, 00:20:22.170 "iobuf_large_cache_size": 16 00:20:22.170 } 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "method": "bdev_raid_set_options", 00:20:22.170 "params": { 00:20:22.170 
"process_window_size_kb": 1024, 00:20:22.170 "process_max_bandwidth_mb_sec": 0 00:20:22.170 } 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "method": "bdev_iscsi_set_options", 00:20:22.170 "params": { 00:20:22.170 "timeout_sec": 30 00:20:22.170 } 00:20:22.170 }, 00:20:22.170 { 00:20:22.170 "method": "bdev_nvme_set_options", 00:20:22.170 "params": { 00:20:22.170 "action_on_timeout": "none", 00:20:22.170 "timeout_us": 0, 00:20:22.170 "timeout_admin_us": 0, 00:20:22.170 "keep_alive_timeout_ms": 10000, 00:20:22.170 "arbitration_burst": 0, 00:20:22.170 "low_priority_weight": 0, 00:20:22.170 "medium_priority_weight": 0, 00:20:22.170 "high_priority_weight": 0, 00:20:22.170 "nvme_adminq_poll_period_us": 10000, 00:20:22.170 "nvme_ioq_poll_period_us": 0, 00:20:22.170 "io_queue_requests": 0, 00:20:22.170 "delay_cmd_submit": true, 00:20:22.170 "transport_retry_count": 4, 00:20:22.170 "bdev_retry_count": 3, 00:20:22.170 "transport_ack_timeout": 0, 00:20:22.170 "ctrlr_loss_timeout_sec": 0, 00:20:22.170 "reconnect_delay_sec": 0, 00:20:22.170 "fast_io_fail_timeout_sec": 0, 00:20:22.170 "disable_auto_failback": false, 00:20:22.170 "generate_uuids": false, 00:20:22.170 "transport_tos": 0, 00:20:22.170 "nvme_error_stat": false, 00:20:22.171 "rdma_srq_size": 0, 00:20:22.171 "io_path_stat": false, 00:20:22.171 "allow_accel_sequence": false, 00:20:22.171 "rdma_max_cq_size": 0, 00:20:22.171 "rdma_cm_event_timeout_ms": 0, 00:20:22.171 "dhchap_digests": [ 00:20:22.171 "sha256", 00:20:22.171 "sha384", 00:20:22.171 "sha512" 00:20:22.171 ], 00:20:22.171 "dhchap_dhgroups": [ 00:20:22.171 "null", 00:20:22.171 "ffdhe2048", 00:20:22.171 "ffdhe3072", 00:20:22.171 "ffdhe4096", 00:20:22.171 "ffdhe6144", 00:20:22.171 "ffdhe8192" 00:20:22.171 ] 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "bdev_nvme_set_hotplug", 00:20:22.171 "params": { 00:20:22.171 "period_us": 100000, 00:20:22.171 "enable": false 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "bdev_malloc_create", 00:20:22.171 "params": { 00:20:22.171 "name": "malloc0", 00:20:22.171 "num_blocks": 8192, 00:20:22.171 "block_size": 4096, 00:20:22.171 "physical_block_size": 4096, 00:20:22.171 "uuid": "222a71cf-6524-422c-9f29-cf8d7a15b68f", 00:20:22.171 "optimal_io_boundary": 0, 00:20:22.171 "md_size": 0, 00:20:22.171 "dif_type": 0, 00:20:22.171 "dif_is_head_of_md": false, 00:20:22.171 "dif_pi_format": 0 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "bdev_wait_for_examine" 00:20:22.171 } 00:20:22.171 ] 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "subsystem": "nbd", 00:20:22.171 "config": [] 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "subsystem": "scheduler", 00:20:22.171 "config": [ 00:20:22.171 { 00:20:22.171 "method": "framework_set_scheduler", 00:20:22.171 "params": { 00:20:22.171 "name": "static" 00:20:22.171 } 00:20:22.171 } 00:20:22.171 ] 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "subsystem": "nvmf", 00:20:22.171 "config": [ 00:20:22.171 { 00:20:22.171 "method": "nvmf_set_config", 00:20:22.171 "params": { 00:20:22.171 "discovery_filter": "match_any", 00:20:22.171 "admin_cmd_passthru": { 00:20:22.171 "identify_ctrlr": false 00:20:22.171 }, 00:20:22.171 "dhchap_digests": [ 00:20:22.171 "sha256", 00:20:22.171 "sha384", 00:20:22.171 "sha512" 00:20:22.171 ], 00:20:22.171 "dhchap_dhgroups": [ 00:20:22.171 "null", 00:20:22.171 "ffdhe2048", 00:20:22.171 "ffdhe3072", 00:20:22.171 "ffdhe4096", 00:20:22.171 "ffdhe6144", 00:20:22.171 "ffdhe8192" 00:20:22.171 ] 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 
00:20:22.171 "method": "nvmf_set_max_subsystems", 00:20:22.171 "params": { 00:20:22.171 "max_subsystems": 1024 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "nvmf_set_crdt", 00:20:22.171 "params": { 00:20:22.171 "crdt1": 0, 00:20:22.171 "crdt2": 0, 00:20:22.171 "crdt3": 0 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "nvmf_create_transport", 00:20:22.171 "params": { 00:20:22.171 "trtype": "TCP", 00:20:22.171 "max_queue_depth": 128, 00:20:22.171 "max_io_qpairs_per_ctrlr": 127, 00:20:22.171 "in_capsule_data_size": 4096, 00:20:22.171 "max_io_size": 131072, 00:20:22.171 "io_unit_size": 131072, 00:20:22.171 "max_aq_depth": 128, 00:20:22.171 "num_shared_buffers": 511, 00:20:22.171 "buf_cache_size": 4294967295, 00:20:22.171 "dif_insert_or_strip": false, 00:20:22.171 "zcopy": false, 00:20:22.171 "c2h_success": false, 00:20:22.171 "sock_priority": 0, 00:20:22.171 "abort_timeout_sec": 1, 00:20:22.171 "ack_timeout": 0, 00:20:22.171 "data_wr_pool_size": 0 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "nvmf_create_subsystem", 00:20:22.171 "params": { 00:20:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.171 "allow_any_host": false, 00:20:22.171 "serial_number": "SPDK00000000000001", 00:20:22.171 "model_number": "SPDK bdev Controller", 00:20:22.171 "max_namespaces": 10, 00:20:22.171 "min_cntlid": 1, 00:20:22.171 "max_cntlid": 65519, 00:20:22.171 "ana_reporting": false 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "nvmf_subsystem_add_host", 00:20:22.171 "params": { 00:20:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.171 "host": "nqn.2016-06.io.spdk:host1", 00:20:22.171 "psk": "key0" 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "nvmf_subsystem_add_ns", 00:20:22.171 "params": { 00:20:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.171 "namespace": { 00:20:22.171 "nsid": 1, 00:20:22.171 "bdev_name": "malloc0", 00:20:22.171 "nguid": "222A71CF6524422C9F29CF8D7A15B68F", 00:20:22.171 "uuid": "222a71cf-6524-422c-9f29-cf8d7a15b68f", 00:20:22.171 "no_auto_visible": false 00:20:22.171 } 00:20:22.171 } 00:20:22.171 }, 00:20:22.171 { 00:20:22.171 "method": "nvmf_subsystem_add_listener", 00:20:22.171 "params": { 00:20:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.171 "listen_address": { 00:20:22.171 "trtype": "TCP", 00:20:22.171 "adrfam": "IPv4", 00:20:22.171 "traddr": "10.0.0.2", 00:20:22.171 "trsvcid": "4420" 00:20:22.171 }, 00:20:22.171 "secure_channel": true 00:20:22.171 } 00:20:22.171 } 00:20:22.171 ] 00:20:22.171 } 00:20:22.171 ] 00:20:22.171 }' 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3547340 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3547340 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3547340 ']' 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:22.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.431 [2024-12-09 11:35:14.336301] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:22.431 [2024-12-09 11:35:14.336353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.431 [2024-12-09 11:35:14.428724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.431 [2024-12-09 11:35:14.457847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.431 [2024-12-09 11:35:14.457877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.431 [2024-12-09 11:35:14.457884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.431 [2024-12-09 11:35:14.457889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.431 [2024-12-09 11:35:14.457893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.431 [2024-12-09 11:35:14.458397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.692 [2024-12-09 11:35:14.652565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.692 [2024-12-09 11:35:14.684589] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.692 [2024-12-09 11:35:14.684783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3547637 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3547637 /var/tmp/bdevperf.sock 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3547637 ']' 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:23.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.262 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:23.262 "subsystems": [ 00:20:23.262 { 00:20:23.262 "subsystem": "keyring", 00:20:23.262 "config": [ 00:20:23.262 { 00:20:23.262 "method": "keyring_file_add_key", 00:20:23.262 "params": { 00:20:23.262 "name": "key0", 00:20:23.262 "path": "/tmp/tmp.GONscGFP1k" 00:20:23.262 } 00:20:23.262 } 00:20:23.262 ] 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "subsystem": "iobuf", 00:20:23.262 "config": [ 00:20:23.262 { 00:20:23.262 "method": "iobuf_set_options", 00:20:23.262 "params": { 00:20:23.262 "small_pool_count": 8192, 00:20:23.262 "large_pool_count": 1024, 00:20:23.262 "small_bufsize": 8192, 00:20:23.262 "large_bufsize": 135168, 00:20:23.262 "enable_numa": false 00:20:23.262 } 00:20:23.262 } 00:20:23.262 ] 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "subsystem": "sock", 00:20:23.262 "config": [ 00:20:23.262 { 00:20:23.262 "method": "sock_set_default_impl", 00:20:23.262 "params": { 00:20:23.262 "impl_name": "posix" 00:20:23.262 } 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "method": "sock_impl_set_options", 00:20:23.262 "params": { 00:20:23.262 "impl_name": "ssl", 00:20:23.262 "recv_buf_size": 4096, 00:20:23.262 "send_buf_size": 4096, 00:20:23.262 "enable_recv_pipe": true, 00:20:23.262 "enable_quickack": false, 00:20:23.262 "enable_placement_id": 0, 00:20:23.262 "enable_zerocopy_send_server": true, 00:20:23.262 "enable_zerocopy_send_client": false, 00:20:23.262 "zerocopy_threshold": 0, 00:20:23.262 "tls_version": 0, 00:20:23.262 "enable_ktls": false 00:20:23.262 } 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "method": "sock_impl_set_options", 00:20:23.262 "params": { 00:20:23.262 "impl_name": "posix", 00:20:23.262 "recv_buf_size": 2097152, 00:20:23.262 "send_buf_size": 2097152, 00:20:23.262 "enable_recv_pipe": true, 00:20:23.262 "enable_quickack": false, 00:20:23.262 "enable_placement_id": 0, 00:20:23.262 "enable_zerocopy_send_server": true, 00:20:23.262 "enable_zerocopy_send_client": false, 00:20:23.262 "zerocopy_threshold": 0, 00:20:23.262 "tls_version": 0, 00:20:23.262 "enable_ktls": false 00:20:23.262 } 00:20:23.262 } 00:20:23.262 ] 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "subsystem": "vmd", 00:20:23.262 "config": [] 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "subsystem": "accel", 00:20:23.262 "config": [ 00:20:23.262 { 00:20:23.262 "method": "accel_set_options", 00:20:23.262 "params": { 00:20:23.262 "small_cache_size": 128, 00:20:23.262 "large_cache_size": 16, 00:20:23.262 "task_count": 2048, 00:20:23.262 "sequence_count": 2048, 00:20:23.262 "buf_count": 2048 00:20:23.262 } 00:20:23.262 } 00:20:23.262 ] 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "subsystem": "bdev", 00:20:23.262 "config": [ 00:20:23.262 { 00:20:23.262 "method": "bdev_set_options", 00:20:23.262 "params": { 00:20:23.262 "bdev_io_pool_size": 65535, 00:20:23.262 "bdev_io_cache_size": 256, 00:20:23.262 "bdev_auto_examine": true, 00:20:23.262 "iobuf_small_cache_size": 128, 
00:20:23.262 "iobuf_large_cache_size": 16 00:20:23.262 } 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "method": "bdev_raid_set_options", 00:20:23.262 "params": { 00:20:23.262 "process_window_size_kb": 1024, 00:20:23.262 "process_max_bandwidth_mb_sec": 0 00:20:23.262 } 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "method": "bdev_iscsi_set_options", 00:20:23.262 "params": { 00:20:23.262 "timeout_sec": 30 00:20:23.262 } 00:20:23.262 }, 00:20:23.262 { 00:20:23.262 "method": "bdev_nvme_set_options", 00:20:23.262 "params": { 00:20:23.262 "action_on_timeout": "none", 00:20:23.262 "timeout_us": 0, 00:20:23.262 "timeout_admin_us": 0, 00:20:23.262 "keep_alive_timeout_ms": 10000, 00:20:23.262 "arbitration_burst": 0, 00:20:23.262 "low_priority_weight": 0, 00:20:23.262 "medium_priority_weight": 0, 00:20:23.262 "high_priority_weight": 0, 00:20:23.263 "nvme_adminq_poll_period_us": 10000, 00:20:23.263 "nvme_ioq_poll_period_us": 0, 00:20:23.263 "io_queue_requests": 512, 00:20:23.263 "delay_cmd_submit": true, 00:20:23.263 "transport_retry_count": 4, 00:20:23.263 "bdev_retry_count": 3, 00:20:23.263 "transport_ack_timeout": 0, 00:20:23.263 "ctrlr_loss_timeout_sec": 0, 00:20:23.263 "reconnect_delay_sec": 0, 00:20:23.263 "fast_io_fail_timeout_sec": 0, 00:20:23.263 "disable_auto_failback": false, 00:20:23.263 "generate_uuids": false, 00:20:23.263 "transport_tos": 0, 00:20:23.263 "nvme_error_stat": false, 00:20:23.263 "rdma_srq_size": 0, 00:20:23.263 "io_path_stat": false, 00:20:23.263 "allow_accel_sequence": false, 00:20:23.263 "rdma_max_cq_size": 0, 00:20:23.263 "rdma_cm_event_timeout_ms": 0, 00:20:23.263 "dhchap_digests": [ 00:20:23.263 "sha256", 00:20:23.263 "sha384", 00:20:23.263 "sha512" 00:20:23.263 ], 00:20:23.263 "dhchap_dhgroups": [ 00:20:23.263 "null", 00:20:23.263 "ffdhe2048", 00:20:23.263 "ffdhe3072", 00:20:23.263 "ffdhe4096", 00:20:23.263 "ffdhe6144", 00:20:23.263 "ffdhe8192" 00:20:23.263 ] 00:20:23.263 } 00:20:23.263 }, 00:20:23.263 { 00:20:23.263 "method": "bdev_nvme_attach_controller", 00:20:23.263 "params": { 00:20:23.263 "name": "TLSTEST", 00:20:23.263 "trtype": "TCP", 00:20:23.263 "adrfam": "IPv4", 00:20:23.263 "traddr": "10.0.0.2", 00:20:23.263 "trsvcid": "4420", 00:20:23.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.263 "prchk_reftag": false, 00:20:23.263 "prchk_guard": false, 00:20:23.263 "ctrlr_loss_timeout_sec": 0, 00:20:23.263 "reconnect_delay_sec": 0, 00:20:23.263 "fast_io_fail_timeout_sec": 0, 00:20:23.263 "psk": "key0", 00:20:23.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.263 "hdgst": false, 00:20:23.263 "ddgst": false, 00:20:23.263 "multipath": "multipath" 00:20:23.263 } 00:20:23.263 }, 00:20:23.263 { 00:20:23.263 "method": "bdev_nvme_set_hotplug", 00:20:23.263 "params": { 00:20:23.263 "period_us": 100000, 00:20:23.263 "enable": false 00:20:23.263 } 00:20:23.263 }, 00:20:23.263 { 00:20:23.263 "method": "bdev_wait_for_examine" 00:20:23.263 } 00:20:23.263 ] 00:20:23.263 }, 00:20:23.263 { 00:20:23.263 "subsystem": "nbd", 00:20:23.263 "config": [] 00:20:23.263 } 00:20:23.263 ] 00:20:23.263 }' 00:20:23.263 [2024-12-09 11:35:15.211971] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
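Neither application in this phase reads its configuration from a file on disk: the target was restarted with -c /dev/fd/62 (target/tls.sh@205) and bdevperf is started with -c /dev/fd/63 (target/tls.sh@206), each fed the JSON that save_config produced moments earlier. The /dev/fd paths are consistent with bash process substitution; a sketch of the pattern, assuming a running target and an spdk checkout as the working directory:

    # Capture the live configuration, then replay it into a fresh target
    # without touching the filesystem. <(...) appears to the child as /dev/fd/NN.
    tgtconf=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")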
00:20:23.263 [2024-12-09 11:35:15.212030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547637 ] 00:20:23.263 [2024-12-09 11:35:15.271006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.263 [2024-12-09 11:35:15.300328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.523 [2024-12-09 11:35:15.435187] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.092 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.092 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.092 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:24.092 Running I/O for 10 seconds... 00:20:26.419 5407.00 IOPS, 21.12 MiB/s [2024-12-09T10:35:19.521Z] 5202.00 IOPS, 20.32 MiB/s [2024-12-09T10:35:20.463Z] 5122.67 IOPS, 20.01 MiB/s [2024-12-09T10:35:21.405Z] 5085.25 IOPS, 19.86 MiB/s [2024-12-09T10:35:22.346Z] 5028.00 IOPS, 19.64 MiB/s [2024-12-09T10:35:23.288Z] 5010.17 IOPS, 19.57 MiB/s [2024-12-09T10:35:24.230Z] 4973.43 IOPS, 19.43 MiB/s [2024-12-09T10:35:25.173Z] 4964.12 IOPS, 19.39 MiB/s [2024-12-09T10:35:26.556Z] 4941.44 IOPS, 19.30 MiB/s [2024-12-09T10:35:26.557Z] 4932.30 IOPS, 19.27 MiB/s 00:20:34.395 Latency(us) 00:20:34.395 [2024-12-09T10:35:26.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.395 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.395 Verification LBA range: start 0x0 length 0x2000 00:20:34.395 TLSTESTn1 : 10.02 4933.63 19.27 0.00 0.00 25901.04 4724.05 27088.21 00:20:34.395 [2024-12-09T10:35:26.557Z] =================================================================================================================== 00:20:34.395 [2024-12-09T10:35:26.557Z] Total : 4933.63 19.27 0.00 0.00 25901.04 4724.05 27088.21 00:20:34.395 { 00:20:34.395 "results": [ 00:20:34.395 { 00:20:34.395 "job": "TLSTESTn1", 00:20:34.395 "core_mask": "0x4", 00:20:34.395 "workload": "verify", 00:20:34.395 "status": "finished", 00:20:34.395 "verify_range": { 00:20:34.395 "start": 0, 00:20:34.395 "length": 8192 00:20:34.395 }, 00:20:34.395 "queue_depth": 128, 00:20:34.395 "io_size": 4096, 00:20:34.395 "runtime": 10.023047, 00:20:34.395 "iops": 4933.629464173919, 00:20:34.395 "mibps": 19.27199009442937, 00:20:34.395 "io_failed": 0, 00:20:34.395 "io_timeout": 0, 00:20:34.395 "avg_latency_us": 25901.040400134814, 00:20:34.395 "min_latency_us": 4724.053333333333, 00:20:34.395 "max_latency_us": 27088.213333333333 00:20:34.395 } 00:20:34.395 ], 00:20:34.395 "core_count": 1 00:20:34.395 } 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3547637 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3547637 ']' 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3547637 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3547637 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3547637' 00:20:34.395 killing process with pid 3547637 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3547637 00:20:34.395 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.395 00:20:34.395 Latency(us) 00:20:34.395 [2024-12-09T10:35:26.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.395 [2024-12-09T10:35:26.557Z] =================================================================================================================== 00:20:34.395 [2024-12-09T10:35:26.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3547637 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3547340 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3547340 ']' 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3547340 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3547340 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3547340' 00:20:34.395 killing process with pid 3547340 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3547340 00:20:34.395 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3547340 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3549785 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3549785 
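The perform_tests output above is machine-readable: alongside the human-oriented table, a results[] array carries iops, io_failed and the average/min/max latencies. A hypothetical post-processing step, not part of tls.sh, assuming the JSON blob has been captured in $result and jq is installed:

    # Pull the headline numbers out of the bdevperf JSON logged above.
    iops=$(echo "$result" | jq '.results[0].iops')         # 4933.63 in this run
    failed=$(echo "$result" | jq '.results[0].io_failed')  # expected to be 0
    echo "TLSTESTn1: ${iops} IOPS, ${failed} failed I/Os"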
00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3549785 ']' 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.656 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.656 [2024-12-09 11:35:26.635006] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:34.656 [2024-12-09 11:35:26.635065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.656 [2024-12-09 11:35:26.712581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.656 [2024-12-09 11:35:26.746687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.656 [2024-12-09 11:35:26.746723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.656 [2024-12-09 11:35:26.746731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.656 [2024-12-09 11:35:26.746738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.656 [2024-12-09 11:35:26.746744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
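The run below (target/tls.sh@220 onward) re-executes the same setup_nvmf_tgt helper already seen twice in this log. Condensed, the helper is the following RPC sequence, with the PSK file as its single argument ($rpc standing in for the full rpc.py path shown in the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.GONscGFP1k
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable (logged above as experimental)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"
    # --psk binds host1's TLS session to key0 in the target's keyring
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0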
00:20:34.656 [2024-12-09 11:35:26.747311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.GONscGFP1k 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GONscGFP1k 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.595 [2024-12-09 11:35:27.613144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.595 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:35.856 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:35.856 [2024-12-09 11:35:27.970026] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.856 [2024-12-09 11:35:27.970239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.856 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:36.116 malloc0 00:20:36.116 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:36.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3550340 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3550340 /var/tmp/bdevperf.sock 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3550340 ']' 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.638 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.638 [2024-12-09 11:35:28.746030] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:36.638 [2024-12-09 11:35:28.746084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550340 ] 00:20:36.899 [2024-12-09 11:35:28.829173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.899 [2024-12-09 11:35:28.859279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.899 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.899 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.899 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:37.160 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:37.160 [2024-12-09 11:35:29.246808] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.421 nvme0n1 00:20:37.421 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:37.421 Running I/O for 1 seconds... 
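[Editor's note] Stripped of the xtrace plumbing, the RPC sequence driven above is short; every name and path below is taken straight from the log. The only TLS-specific pieces are -k on the listener, the keyring entry, and --psk on the host mapping and the initiator-side attach; the PSK file itself (/tmp/tmp.GONscGFP1k) would hold an NVMe-oF interchange-format key string (NVMeTLSkey-1:01:...), though its contents never appear in this trace.

# Target side (RPC socket /var/tmp/spdk.sock), as in setup_nvmf_tgt:
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                     # -k: TLS-capable listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.GONscGFP1k
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0              # bind the PSK to this host NQN

# Initiator side (bdevperf's RPC socket): same key, then a TLS attach.
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GONscGFP1k
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1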
00:20:38.362 3871.00 IOPS, 15.12 MiB/s 00:20:38.362 Latency(us) 00:20:38.362 [2024-12-09T10:35:30.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.362 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:38.362 Verification LBA range: start 0x0 length 0x2000 00:20:38.362 nvme0n1 : 1.02 3923.53 15.33 0.00 0.00 32316.93 6362.45 83886.08 00:20:38.362 [2024-12-09T10:35:30.524Z] =================================================================================================================== 00:20:38.362 [2024-12-09T10:35:30.524Z] Total : 3923.53 15.33 0.00 0.00 32316.93 6362.45 83886.08 00:20:38.362 { 00:20:38.362 "results": [ 00:20:38.362 { 00:20:38.362 "job": "nvme0n1", 00:20:38.362 "core_mask": "0x2", 00:20:38.362 "workload": "verify", 00:20:38.362 "status": "finished", 00:20:38.362 "verify_range": { 00:20:38.362 "start": 0, 00:20:38.362 "length": 8192 00:20:38.362 }, 00:20:38.362 "queue_depth": 128, 00:20:38.362 "io_size": 4096, 00:20:38.362 "runtime": 1.019236, 00:20:38.362 "iops": 3923.527033974467, 00:20:38.362 "mibps": 15.326277476462762, 00:20:38.362 "io_failed": 0, 00:20:38.362 "io_timeout": 0, 00:20:38.362 "avg_latency_us": 32316.927871967993, 00:20:38.362 "min_latency_us": 6362.453333333333, 00:20:38.362 "max_latency_us": 83886.08 00:20:38.362 } 00:20:38.362 ], 00:20:38.362 "core_count": 1 00:20:38.362 } 00:20:38.362 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3550340 00:20:38.362 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3550340 ']' 00:20:38.362 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3550340 00:20:38.362 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.362 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.362 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550340 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550340' 00:20:38.624 killing process with pid 3550340 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3550340 00:20:38.624 Received shutdown signal, test time was about 1.000000 seconds 00:20:38.624 00:20:38.624 Latency(us) 00:20:38.624 [2024-12-09T10:35:30.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.624 [2024-12-09T10:35:30.786Z] =================================================================================================================== 00:20:38.624 [2024-12-09T10:35:30.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3550340 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3549785 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3549785 ']' 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3549785 00:20:38.624 11:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549785 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549785' 00:20:38.624 killing process with pid 3549785 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3549785 00:20:38.624 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3549785 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3550700 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3550700 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3550700 ']' 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.885 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.885 [2024-12-09 11:35:30.875840] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:38.885 [2024-12-09 11:35:30.875893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.885 [2024-12-09 11:35:30.952939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.885 [2024-12-09 11:35:30.987187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.885 [2024-12-09 11:35:30.987222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:38.885 [2024-12-09 11:35:30.987230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.885 [2024-12-09 11:35:30.987237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.885 [2024-12-09 11:35:30.987243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.885 [2024-12-09 11:35:30.987805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.146 [2024-12-09 11:35:31.115952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.146 malloc0 00:20:39.146 [2024-12-09 11:35:31.142638] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.146 [2024-12-09 11:35:31.142871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3550723 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3550723 /var/tmp/bdevperf.sock 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3550723 ']' 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.146 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.146 [2024-12-09 11:35:31.223177] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:20:39.146 [2024-12-09 11:35:31.223223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550723 ] 00:20:39.146 [2024-12-09 11:35:31.305658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.407 [2024-12-09 11:35:31.335403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.981 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.981 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.981 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GONscGFP1k 00:20:40.242 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:40.242 [2024-12-09 11:35:32.304634] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.242 nvme0n1 00:20:40.502 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.502 Running I/O for 1 seconds... 00:20:41.443 4171.00 IOPS, 16.29 MiB/s 00:20:41.443 Latency(us) 00:20:41.443 [2024-12-09T10:35:33.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.443 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:41.443 Verification LBA range: start 0x0 length 0x2000 00:20:41.443 nvme0n1 : 1.03 4159.31 16.25 0.00 0.00 30546.72 4560.21 72963.41 00:20:41.443 [2024-12-09T10:35:33.605Z] =================================================================================================================== 00:20:41.443 [2024-12-09T10:35:33.605Z] Total : 4159.31 16.25 0.00 0.00 30546.72 4560.21 72963.41 00:20:41.443 { 00:20:41.443 "results": [ 00:20:41.443 { 00:20:41.443 "job": "nvme0n1", 00:20:41.443 "core_mask": "0x2", 00:20:41.443 "workload": "verify", 00:20:41.443 "status": "finished", 00:20:41.443 "verify_range": { 00:20:41.443 "start": 0, 00:20:41.443 "length": 8192 00:20:41.443 }, 00:20:41.443 "queue_depth": 128, 00:20:41.443 "io_size": 4096, 00:20:41.443 "runtime": 1.033584, 00:20:41.443 "iops": 4159.313611665815, 00:20:41.443 "mibps": 16.24731879556959, 00:20:41.443 "io_failed": 0, 00:20:41.443 "io_timeout": 0, 00:20:41.443 "avg_latency_us": 30546.7243048771, 00:20:41.443 "min_latency_us": 4560.213333333333, 00:20:41.443 "max_latency_us": 72963.41333333333 00:20:41.443 } 00:20:41.443 ], 00:20:41.443 "core_count": 1 00:20:41.443 } 00:20:41.443 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:41.443 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.443 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.704 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.704 11:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:41.704 "subsystems": [ 00:20:41.704 { 00:20:41.704 "subsystem": "keyring", 00:20:41.704 "config": [ 00:20:41.704 { 00:20:41.704 "method": "keyring_file_add_key", 00:20:41.704 "params": { 00:20:41.704 "name": "key0", 00:20:41.704 "path": "/tmp/tmp.GONscGFP1k" 00:20:41.704 } 00:20:41.704 } 00:20:41.704 ] 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "subsystem": "iobuf", 00:20:41.704 "config": [ 00:20:41.704 { 00:20:41.704 "method": "iobuf_set_options", 00:20:41.704 "params": { 00:20:41.704 "small_pool_count": 8192, 00:20:41.704 "large_pool_count": 1024, 00:20:41.704 "small_bufsize": 8192, 00:20:41.704 "large_bufsize": 135168, 00:20:41.704 "enable_numa": false 00:20:41.704 } 00:20:41.704 } 00:20:41.704 ] 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "subsystem": "sock", 00:20:41.704 "config": [ 00:20:41.704 { 00:20:41.704 "method": "sock_set_default_impl", 00:20:41.704 "params": { 00:20:41.704 "impl_name": "posix" 00:20:41.704 } 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "method": "sock_impl_set_options", 00:20:41.704 "params": { 00:20:41.704 "impl_name": "ssl", 00:20:41.704 "recv_buf_size": 4096, 00:20:41.704 "send_buf_size": 4096, 00:20:41.704 "enable_recv_pipe": true, 00:20:41.704 "enable_quickack": false, 00:20:41.704 "enable_placement_id": 0, 00:20:41.704 "enable_zerocopy_send_server": true, 00:20:41.704 "enable_zerocopy_send_client": false, 00:20:41.704 "zerocopy_threshold": 0, 00:20:41.704 "tls_version": 0, 00:20:41.704 "enable_ktls": false 00:20:41.704 } 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "method": "sock_impl_set_options", 00:20:41.704 "params": { 00:20:41.704 "impl_name": "posix", 00:20:41.704 "recv_buf_size": 2097152, 00:20:41.704 "send_buf_size": 2097152, 00:20:41.704 "enable_recv_pipe": true, 00:20:41.704 "enable_quickack": false, 00:20:41.704 "enable_placement_id": 0, 00:20:41.704 "enable_zerocopy_send_server": true, 00:20:41.704 "enable_zerocopy_send_client": false, 00:20:41.704 "zerocopy_threshold": 0, 00:20:41.704 "tls_version": 0, 00:20:41.704 "enable_ktls": false 00:20:41.704 } 00:20:41.704 } 00:20:41.704 ] 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "subsystem": "vmd", 00:20:41.704 "config": [] 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "subsystem": "accel", 00:20:41.704 "config": [ 00:20:41.704 { 00:20:41.704 "method": "accel_set_options", 00:20:41.704 "params": { 00:20:41.704 "small_cache_size": 128, 00:20:41.704 "large_cache_size": 16, 00:20:41.704 "task_count": 2048, 00:20:41.704 "sequence_count": 2048, 00:20:41.704 "buf_count": 2048 00:20:41.704 } 00:20:41.704 } 00:20:41.704 ] 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "subsystem": "bdev", 00:20:41.704 "config": [ 00:20:41.704 { 00:20:41.704 "method": "bdev_set_options", 00:20:41.704 "params": { 00:20:41.704 "bdev_io_pool_size": 65535, 00:20:41.704 "bdev_io_cache_size": 256, 00:20:41.704 "bdev_auto_examine": true, 00:20:41.704 "iobuf_small_cache_size": 128, 00:20:41.704 "iobuf_large_cache_size": 16 00:20:41.704 } 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "method": "bdev_raid_set_options", 00:20:41.704 "params": { 00:20:41.704 "process_window_size_kb": 1024, 00:20:41.704 "process_max_bandwidth_mb_sec": 0 00:20:41.704 } 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "method": "bdev_iscsi_set_options", 00:20:41.704 "params": { 00:20:41.704 "timeout_sec": 30 00:20:41.704 } 00:20:41.704 }, 00:20:41.704 { 00:20:41.704 "method": "bdev_nvme_set_options", 00:20:41.704 "params": { 00:20:41.704 "action_on_timeout": "none", 00:20:41.704 
"timeout_us": 0, 00:20:41.704 "timeout_admin_us": 0, 00:20:41.704 "keep_alive_timeout_ms": 10000, 00:20:41.704 "arbitration_burst": 0, 00:20:41.704 "low_priority_weight": 0, 00:20:41.704 "medium_priority_weight": 0, 00:20:41.704 "high_priority_weight": 0, 00:20:41.704 "nvme_adminq_poll_period_us": 10000, 00:20:41.704 "nvme_ioq_poll_period_us": 0, 00:20:41.704 "io_queue_requests": 0, 00:20:41.704 "delay_cmd_submit": true, 00:20:41.704 "transport_retry_count": 4, 00:20:41.704 "bdev_retry_count": 3, 00:20:41.704 "transport_ack_timeout": 0, 00:20:41.704 "ctrlr_loss_timeout_sec": 0, 00:20:41.704 "reconnect_delay_sec": 0, 00:20:41.704 "fast_io_fail_timeout_sec": 0, 00:20:41.704 "disable_auto_failback": false, 00:20:41.704 "generate_uuids": false, 00:20:41.704 "transport_tos": 0, 00:20:41.705 "nvme_error_stat": false, 00:20:41.705 "rdma_srq_size": 0, 00:20:41.705 "io_path_stat": false, 00:20:41.705 "allow_accel_sequence": false, 00:20:41.705 "rdma_max_cq_size": 0, 00:20:41.705 "rdma_cm_event_timeout_ms": 0, 00:20:41.705 "dhchap_digests": [ 00:20:41.705 "sha256", 00:20:41.705 "sha384", 00:20:41.705 "sha512" 00:20:41.705 ], 00:20:41.705 "dhchap_dhgroups": [ 00:20:41.705 "null", 00:20:41.705 "ffdhe2048", 00:20:41.705 "ffdhe3072", 00:20:41.705 "ffdhe4096", 00:20:41.705 "ffdhe6144", 00:20:41.705 "ffdhe8192" 00:20:41.705 ] 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "bdev_nvme_set_hotplug", 00:20:41.705 "params": { 00:20:41.705 "period_us": 100000, 00:20:41.705 "enable": false 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "bdev_malloc_create", 00:20:41.705 "params": { 00:20:41.705 "name": "malloc0", 00:20:41.705 "num_blocks": 8192, 00:20:41.705 "block_size": 4096, 00:20:41.705 "physical_block_size": 4096, 00:20:41.705 "uuid": "9b9cc44c-7bb5-434a-bf21-2f32fdecb6bb", 00:20:41.705 "optimal_io_boundary": 0, 00:20:41.705 "md_size": 0, 00:20:41.705 "dif_type": 0, 00:20:41.705 "dif_is_head_of_md": false, 00:20:41.705 "dif_pi_format": 0 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "bdev_wait_for_examine" 00:20:41.705 } 00:20:41.705 ] 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "subsystem": "nbd", 00:20:41.705 "config": [] 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "subsystem": "scheduler", 00:20:41.705 "config": [ 00:20:41.705 { 00:20:41.705 "method": "framework_set_scheduler", 00:20:41.705 "params": { 00:20:41.705 "name": "static" 00:20:41.705 } 00:20:41.705 } 00:20:41.705 ] 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "subsystem": "nvmf", 00:20:41.705 "config": [ 00:20:41.705 { 00:20:41.705 "method": "nvmf_set_config", 00:20:41.705 "params": { 00:20:41.705 "discovery_filter": "match_any", 00:20:41.705 "admin_cmd_passthru": { 00:20:41.705 "identify_ctrlr": false 00:20:41.705 }, 00:20:41.705 "dhchap_digests": [ 00:20:41.705 "sha256", 00:20:41.705 "sha384", 00:20:41.705 "sha512" 00:20:41.705 ], 00:20:41.705 "dhchap_dhgroups": [ 00:20:41.705 "null", 00:20:41.705 "ffdhe2048", 00:20:41.705 "ffdhe3072", 00:20:41.705 "ffdhe4096", 00:20:41.705 "ffdhe6144", 00:20:41.705 "ffdhe8192" 00:20:41.705 ] 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "nvmf_set_max_subsystems", 00:20:41.705 "params": { 00:20:41.705 "max_subsystems": 1024 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "nvmf_set_crdt", 00:20:41.705 "params": { 00:20:41.705 "crdt1": 0, 00:20:41.705 "crdt2": 0, 00:20:41.705 "crdt3": 0 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "nvmf_create_transport", 00:20:41.705 "params": 
{ 00:20:41.705 "trtype": "TCP", 00:20:41.705 "max_queue_depth": 128, 00:20:41.705 "max_io_qpairs_per_ctrlr": 127, 00:20:41.705 "in_capsule_data_size": 4096, 00:20:41.705 "max_io_size": 131072, 00:20:41.705 "io_unit_size": 131072, 00:20:41.705 "max_aq_depth": 128, 00:20:41.705 "num_shared_buffers": 511, 00:20:41.705 "buf_cache_size": 4294967295, 00:20:41.705 "dif_insert_or_strip": false, 00:20:41.705 "zcopy": false, 00:20:41.705 "c2h_success": false, 00:20:41.705 "sock_priority": 0, 00:20:41.705 "abort_timeout_sec": 1, 00:20:41.705 "ack_timeout": 0, 00:20:41.705 "data_wr_pool_size": 0 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "nvmf_create_subsystem", 00:20:41.705 "params": { 00:20:41.705 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.705 "allow_any_host": false, 00:20:41.705 "serial_number": "00000000000000000000", 00:20:41.705 "model_number": "SPDK bdev Controller", 00:20:41.705 "max_namespaces": 32, 00:20:41.705 "min_cntlid": 1, 00:20:41.705 "max_cntlid": 65519, 00:20:41.705 "ana_reporting": false 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "nvmf_subsystem_add_host", 00:20:41.705 "params": { 00:20:41.705 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.705 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.705 "psk": "key0" 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "nvmf_subsystem_add_ns", 00:20:41.705 "params": { 00:20:41.705 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.705 "namespace": { 00:20:41.705 "nsid": 1, 00:20:41.705 "bdev_name": "malloc0", 00:20:41.705 "nguid": "9B9CC44C7BB5434ABF212F32FDECB6BB", 00:20:41.705 "uuid": "9b9cc44c-7bb5-434a-bf21-2f32fdecb6bb", 00:20:41.705 "no_auto_visible": false 00:20:41.705 } 00:20:41.705 } 00:20:41.705 }, 00:20:41.705 { 00:20:41.705 "method": "nvmf_subsystem_add_listener", 00:20:41.705 "params": { 00:20:41.705 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.705 "listen_address": { 00:20:41.705 "trtype": "TCP", 00:20:41.705 "adrfam": "IPv4", 00:20:41.705 "traddr": "10.0.0.2", 00:20:41.705 "trsvcid": "4420" 00:20:41.705 }, 00:20:41.705 "secure_channel": false, 00:20:41.705 "sock_impl": "ssl" 00:20:41.705 } 00:20:41.705 } 00:20:41.705 ] 00:20:41.705 } 00:20:41.705 ] 00:20:41.705 }' 00:20:41.705 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:41.966 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:41.966 "subsystems": [ 00:20:41.966 { 00:20:41.966 "subsystem": "keyring", 00:20:41.966 "config": [ 00:20:41.966 { 00:20:41.966 "method": "keyring_file_add_key", 00:20:41.966 "params": { 00:20:41.966 "name": "key0", 00:20:41.966 "path": "/tmp/tmp.GONscGFP1k" 00:20:41.966 } 00:20:41.966 } 00:20:41.966 ] 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "subsystem": "iobuf", 00:20:41.966 "config": [ 00:20:41.966 { 00:20:41.966 "method": "iobuf_set_options", 00:20:41.966 "params": { 00:20:41.966 "small_pool_count": 8192, 00:20:41.966 "large_pool_count": 1024, 00:20:41.966 "small_bufsize": 8192, 00:20:41.966 "large_bufsize": 135168, 00:20:41.966 "enable_numa": false 00:20:41.966 } 00:20:41.966 } 00:20:41.966 ] 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "subsystem": "sock", 00:20:41.966 "config": [ 00:20:41.966 { 00:20:41.966 "method": "sock_set_default_impl", 00:20:41.966 "params": { 00:20:41.966 "impl_name": "posix" 00:20:41.966 } 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "method": "sock_impl_set_options", 00:20:41.966 
"params": { 00:20:41.966 "impl_name": "ssl", 00:20:41.966 "recv_buf_size": 4096, 00:20:41.966 "send_buf_size": 4096, 00:20:41.966 "enable_recv_pipe": true, 00:20:41.966 "enable_quickack": false, 00:20:41.966 "enable_placement_id": 0, 00:20:41.966 "enable_zerocopy_send_server": true, 00:20:41.966 "enable_zerocopy_send_client": false, 00:20:41.966 "zerocopy_threshold": 0, 00:20:41.966 "tls_version": 0, 00:20:41.966 "enable_ktls": false 00:20:41.966 } 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "method": "sock_impl_set_options", 00:20:41.966 "params": { 00:20:41.966 "impl_name": "posix", 00:20:41.966 "recv_buf_size": 2097152, 00:20:41.966 "send_buf_size": 2097152, 00:20:41.966 "enable_recv_pipe": true, 00:20:41.966 "enable_quickack": false, 00:20:41.966 "enable_placement_id": 0, 00:20:41.966 "enable_zerocopy_send_server": true, 00:20:41.966 "enable_zerocopy_send_client": false, 00:20:41.966 "zerocopy_threshold": 0, 00:20:41.966 "tls_version": 0, 00:20:41.966 "enable_ktls": false 00:20:41.966 } 00:20:41.966 } 00:20:41.966 ] 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "subsystem": "vmd", 00:20:41.966 "config": [] 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "subsystem": "accel", 00:20:41.966 "config": [ 00:20:41.966 { 00:20:41.966 "method": "accel_set_options", 00:20:41.966 "params": { 00:20:41.966 "small_cache_size": 128, 00:20:41.966 "large_cache_size": 16, 00:20:41.966 "task_count": 2048, 00:20:41.966 "sequence_count": 2048, 00:20:41.966 "buf_count": 2048 00:20:41.966 } 00:20:41.966 } 00:20:41.966 ] 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "subsystem": "bdev", 00:20:41.966 "config": [ 00:20:41.966 { 00:20:41.966 "method": "bdev_set_options", 00:20:41.966 "params": { 00:20:41.966 "bdev_io_pool_size": 65535, 00:20:41.966 "bdev_io_cache_size": 256, 00:20:41.966 "bdev_auto_examine": true, 00:20:41.966 "iobuf_small_cache_size": 128, 00:20:41.966 "iobuf_large_cache_size": 16 00:20:41.966 } 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "method": "bdev_raid_set_options", 00:20:41.966 "params": { 00:20:41.966 "process_window_size_kb": 1024, 00:20:41.966 "process_max_bandwidth_mb_sec": 0 00:20:41.966 } 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "method": "bdev_iscsi_set_options", 00:20:41.966 "params": { 00:20:41.966 "timeout_sec": 30 00:20:41.966 } 00:20:41.966 }, 00:20:41.966 { 00:20:41.966 "method": "bdev_nvme_set_options", 00:20:41.966 "params": { 00:20:41.966 "action_on_timeout": "none", 00:20:41.966 "timeout_us": 0, 00:20:41.966 "timeout_admin_us": 0, 00:20:41.966 "keep_alive_timeout_ms": 10000, 00:20:41.966 "arbitration_burst": 0, 00:20:41.966 "low_priority_weight": 0, 00:20:41.966 "medium_priority_weight": 0, 00:20:41.966 "high_priority_weight": 0, 00:20:41.966 "nvme_adminq_poll_period_us": 10000, 00:20:41.966 "nvme_ioq_poll_period_us": 0, 00:20:41.966 "io_queue_requests": 512, 00:20:41.966 "delay_cmd_submit": true, 00:20:41.966 "transport_retry_count": 4, 00:20:41.966 "bdev_retry_count": 3, 00:20:41.966 "transport_ack_timeout": 0, 00:20:41.966 "ctrlr_loss_timeout_sec": 0, 00:20:41.966 "reconnect_delay_sec": 0, 00:20:41.966 "fast_io_fail_timeout_sec": 0, 00:20:41.966 "disable_auto_failback": false, 00:20:41.967 "generate_uuids": false, 00:20:41.967 "transport_tos": 0, 00:20:41.967 "nvme_error_stat": false, 00:20:41.967 "rdma_srq_size": 0, 00:20:41.967 "io_path_stat": false, 00:20:41.967 "allow_accel_sequence": false, 00:20:41.967 "rdma_max_cq_size": 0, 00:20:41.967 "rdma_cm_event_timeout_ms": 0, 00:20:41.967 "dhchap_digests": [ 00:20:41.967 "sha256", 00:20:41.967 "sha384", 00:20:41.967 
"sha512" 00:20:41.967 ], 00:20:41.967 "dhchap_dhgroups": [ 00:20:41.967 "null", 00:20:41.967 "ffdhe2048", 00:20:41.967 "ffdhe3072", 00:20:41.967 "ffdhe4096", 00:20:41.967 "ffdhe6144", 00:20:41.967 "ffdhe8192" 00:20:41.967 ] 00:20:41.967 } 00:20:41.967 }, 00:20:41.967 { 00:20:41.967 "method": "bdev_nvme_attach_controller", 00:20:41.967 "params": { 00:20:41.967 "name": "nvme0", 00:20:41.967 "trtype": "TCP", 00:20:41.967 "adrfam": "IPv4", 00:20:41.967 "traddr": "10.0.0.2", 00:20:41.967 "trsvcid": "4420", 00:20:41.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.967 "prchk_reftag": false, 00:20:41.967 "prchk_guard": false, 00:20:41.967 "ctrlr_loss_timeout_sec": 0, 00:20:41.967 "reconnect_delay_sec": 0, 00:20:41.967 "fast_io_fail_timeout_sec": 0, 00:20:41.967 "psk": "key0", 00:20:41.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.967 "hdgst": false, 00:20:41.967 "ddgst": false, 00:20:41.967 "multipath": "multipath" 00:20:41.967 } 00:20:41.967 }, 00:20:41.967 { 00:20:41.967 "method": "bdev_nvme_set_hotplug", 00:20:41.967 "params": { 00:20:41.967 "period_us": 100000, 00:20:41.967 "enable": false 00:20:41.967 } 00:20:41.967 }, 00:20:41.967 { 00:20:41.967 "method": "bdev_enable_histogram", 00:20:41.967 "params": { 00:20:41.967 "name": "nvme0n1", 00:20:41.967 "enable": true 00:20:41.967 } 00:20:41.967 }, 00:20:41.967 { 00:20:41.967 "method": "bdev_wait_for_examine" 00:20:41.967 } 00:20:41.967 ] 00:20:41.967 }, 00:20:41.967 { 00:20:41.967 "subsystem": "nbd", 00:20:41.967 "config": [] 00:20:41.967 } 00:20:41.967 ] 00:20:41.967 }' 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3550723 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3550723 ']' 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3550723 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550723 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550723' 00:20:41.967 killing process with pid 3550723 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3550723 00:20:41.967 Received shutdown signal, test time was about 1.000000 seconds 00:20:41.967 00:20:41.967 Latency(us) 00:20:41.967 [2024-12-09T10:35:34.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.967 [2024-12-09T10:35:34.129Z] =================================================================================================================== 00:20:41.967 [2024-12-09T10:35:34.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.967 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3550723 00:20:41.967 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3550700 00:20:41.967 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3550700 
']' 00:20:41.967 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3550700 00:20:41.967 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.967 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.967 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550700 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550700' 00:20:42.229 killing process with pid 3550700 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3550700 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3550700 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.229 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:42.229 "subsystems": [ 00:20:42.229 { 00:20:42.229 "subsystem": "keyring", 00:20:42.229 "config": [ 00:20:42.229 { 00:20:42.229 "method": "keyring_file_add_key", 00:20:42.229 "params": { 00:20:42.229 "name": "key0", 00:20:42.229 "path": "/tmp/tmp.GONscGFP1k" 00:20:42.229 } 00:20:42.229 } 00:20:42.229 ] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "iobuf", 00:20:42.229 "config": [ 00:20:42.229 { 00:20:42.229 "method": "iobuf_set_options", 00:20:42.229 "params": { 00:20:42.229 "small_pool_count": 8192, 00:20:42.229 "large_pool_count": 1024, 00:20:42.229 "small_bufsize": 8192, 00:20:42.229 "large_bufsize": 135168, 00:20:42.229 "enable_numa": false 00:20:42.229 } 00:20:42.229 } 00:20:42.229 ] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "sock", 00:20:42.229 "config": [ 00:20:42.229 { 00:20:42.229 "method": "sock_set_default_impl", 00:20:42.229 "params": { 00:20:42.229 "impl_name": "posix" 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "sock_impl_set_options", 00:20:42.229 "params": { 00:20:42.229 "impl_name": "ssl", 00:20:42.229 "recv_buf_size": 4096, 00:20:42.229 "send_buf_size": 4096, 00:20:42.229 "enable_recv_pipe": true, 00:20:42.229 "enable_quickack": false, 00:20:42.229 "enable_placement_id": 0, 00:20:42.229 "enable_zerocopy_send_server": true, 00:20:42.229 "enable_zerocopy_send_client": false, 00:20:42.229 "zerocopy_threshold": 0, 00:20:42.229 "tls_version": 0, 00:20:42.229 "enable_ktls": false 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "sock_impl_set_options", 00:20:42.229 "params": { 00:20:42.229 "impl_name": "posix", 00:20:42.229 "recv_buf_size": 2097152, 00:20:42.229 "send_buf_size": 2097152, 00:20:42.229 "enable_recv_pipe": true, 00:20:42.229 "enable_quickack": false, 00:20:42.229 "enable_placement_id": 0, 00:20:42.229 "enable_zerocopy_send_server": true, 00:20:42.229 "enable_zerocopy_send_client": false, 00:20:42.229 "zerocopy_threshold": 0, 00:20:42.229 "tls_version": 0, 00:20:42.229 "enable_ktls": 
false 00:20:42.229 } 00:20:42.229 } 00:20:42.229 ] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "vmd", 00:20:42.229 "config": [] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "accel", 00:20:42.229 "config": [ 00:20:42.229 { 00:20:42.229 "method": "accel_set_options", 00:20:42.229 "params": { 00:20:42.229 "small_cache_size": 128, 00:20:42.229 "large_cache_size": 16, 00:20:42.229 "task_count": 2048, 00:20:42.229 "sequence_count": 2048, 00:20:42.229 "buf_count": 2048 00:20:42.229 } 00:20:42.229 } 00:20:42.229 ] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "bdev", 00:20:42.229 "config": [ 00:20:42.229 { 00:20:42.229 "method": "bdev_set_options", 00:20:42.229 "params": { 00:20:42.229 "bdev_io_pool_size": 65535, 00:20:42.229 "bdev_io_cache_size": 256, 00:20:42.229 "bdev_auto_examine": true, 00:20:42.229 "iobuf_small_cache_size": 128, 00:20:42.229 "iobuf_large_cache_size": 16 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "bdev_raid_set_options", 00:20:42.229 "params": { 00:20:42.229 "process_window_size_kb": 1024, 00:20:42.229 "process_max_bandwidth_mb_sec": 0 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "bdev_iscsi_set_options", 00:20:42.229 "params": { 00:20:42.229 "timeout_sec": 30 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "bdev_nvme_set_options", 00:20:42.229 "params": { 00:20:42.229 "action_on_timeout": "none", 00:20:42.229 "timeout_us": 0, 00:20:42.229 "timeout_admin_us": 0, 00:20:42.229 "keep_alive_timeout_ms": 10000, 00:20:42.229 "arbitration_burst": 0, 00:20:42.229 "low_priority_weight": 0, 00:20:42.229 "medium_priority_weight": 0, 00:20:42.229 "high_priority_weight": 0, 00:20:42.229 "nvme_adminq_poll_period_us": 10000, 00:20:42.229 "nvme_ioq_poll_period_us": 0, 00:20:42.229 "io_queue_requests": 0, 00:20:42.229 "delay_cmd_submit": true, 00:20:42.229 "transport_retry_count": 4, 00:20:42.229 "bdev_retry_count": 3, 00:20:42.229 "transport_ack_timeout": 0, 00:20:42.229 "ctrlr_loss_timeout_sec": 0, 00:20:42.229 "reconnect_delay_sec": 0, 00:20:42.229 "fast_io_fail_timeout_sec": 0, 00:20:42.229 "disable_auto_failback": false, 00:20:42.229 "generate_uuids": false, 00:20:42.229 "transport_tos": 0, 00:20:42.229 "nvme_error_stat": false, 00:20:42.229 "rdma_srq_size": 0, 00:20:42.229 "io_path_stat": false, 00:20:42.229 "allow_accel_sequence": false, 00:20:42.229 "rdma_max_cq_size": 0, 00:20:42.229 "rdma_cm_event_timeout_ms": 0, 00:20:42.229 "dhchap_digests": [ 00:20:42.229 "sha256", 00:20:42.229 "sha384", 00:20:42.229 "sha512" 00:20:42.229 ], 00:20:42.229 "dhchap_dhgroups": [ 00:20:42.229 "null", 00:20:42.229 "ffdhe2048", 00:20:42.229 "ffdhe3072", 00:20:42.229 "ffdhe4096", 00:20:42.229 "ffdhe6144", 00:20:42.229 "ffdhe8192" 00:20:42.229 ] 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "bdev_nvme_set_hotplug", 00:20:42.229 "params": { 00:20:42.229 "period_us": 100000, 00:20:42.229 "enable": false 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "bdev_malloc_create", 00:20:42.229 "params": { 00:20:42.229 "name": "malloc0", 00:20:42.229 "num_blocks": 8192, 00:20:42.229 "block_size": 4096, 00:20:42.229 "physical_block_size": 4096, 00:20:42.229 "uuid": "9b9cc44c-7bb5-434a-bf21-2f32fdecb6bb", 00:20:42.229 "optimal_io_boundary": 0, 00:20:42.229 "md_size": 0, 00:20:42.229 "dif_type": 0, 00:20:42.229 "dif_is_head_of_md": false, 00:20:42.229 "dif_pi_format": 0 00:20:42.229 } 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "method": "bdev_wait_for_examine" 
00:20:42.229 } 00:20:42.229 ] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "nbd", 00:20:42.229 "config": [] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "scheduler", 00:20:42.229 "config": [ 00:20:42.229 { 00:20:42.229 "method": "framework_set_scheduler", 00:20:42.229 "params": { 00:20:42.229 "name": "static" 00:20:42.229 } 00:20:42.229 } 00:20:42.229 ] 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "subsystem": "nvmf", 00:20:42.229 "config": [ 00:20:42.229 { 00:20:42.229 "method": "nvmf_set_config", 00:20:42.229 "params": { 00:20:42.229 "discovery_filter": "match_any", 00:20:42.229 "admin_cmd_passthru": { 00:20:42.229 "identify_ctrlr": false 00:20:42.229 }, 00:20:42.229 "dhchap_digests": [ 00:20:42.229 "sha256", 00:20:42.229 "sha384", 00:20:42.229 "sha512" 00:20:42.229 ], 00:20:42.229 "dhchap_dhgroups": [ 00:20:42.229 "null", 00:20:42.229 "ffdhe2048", 00:20:42.229 "ffdhe3072", 00:20:42.229 "ffdhe4096", 00:20:42.229 "ffdhe6144", 00:20:42.229 "ffdhe8192" 00:20:42.229 ] 00:20:42.230 } 00:20:42.230 }, 00:20:42.230 { 00:20:42.230 "method": "nvmf_set_max_subsystems", 00:20:42.230 "params": { 00:20:42.230 "max_subsystems": 1024 00:20:42.230 } 00:20:42.230 }, 00:20:42.230 { 00:20:42.230 "method": "nvmf_set_crdt", 00:20:42.230 "params": { 00:20:42.230 "crdt1": 0, 00:20:42.230 "crdt2": 0, 00:20:42.230 "crdt3": 0 00:20:42.230 } 00:20:42.230 }, 00:20:42.230 { 00:20:42.230 "method": "nvmf_create_transport", 00:20:42.230 "params": { 00:20:42.230 "trtype": "TCP", 00:20:42.230 "max_queue_depth": 128, 00:20:42.230 "max_io_qpairs_per_ctrlr": 127, 00:20:42.230 "in_capsule_data_size": 4096, 00:20:42.230 "max_io_size": 131072, 00:20:42.230 "io_unit_size": 131072, 00:20:42.230 "max_aq_depth": 128, 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.230 "num_shared_buffers": 511, 00:20:42.230 "buf_cache_size": 4294967295, 00:20:42.230 "dif_insert_or_strip": false, 00:20:42.230 "zcopy": false, 00:20:42.230 "c2h_success": false, 00:20:42.230 "sock_priority": 0, 00:20:42.230 "abort_timeout_sec": 1, 00:20:42.230 "ack_timeout": 0, 00:20:42.230 "data_wr_pool_size": 0 00:20:42.230 } 00:20:42.230 }, 00:20:42.230 { 00:20:42.230 "method": "nvmf_create_subsystem", 00:20:42.230 "params": { 00:20:42.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.230 "allow_any_host": false, 00:20:42.230 "serial_number": "00000000000000000000", 00:20:42.230 "model_number": "SPDK bdev Controller", 00:20:42.230 "max_namespaces": 32, 00:20:42.230 "min_cntlid": 1, 00:20:42.230 "max_cntlid": 65519, 00:20:42.230 "ana_reporting": false 00:20:42.230 } 00:20:42.230 }, 00:20:42.230 { 00:20:42.230 "method": "nvmf_subsystem_add_host", 00:20:42.230 "params": { 00:20:42.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.230 "host": "nqn.2016-06.io.spdk:host1", 00:20:42.230 "psk": "key0" 00:20:42.230 } 00:20:42.230 }, 00:20:42.230 { 00:20:42.230 "method": "nvmf_subsystem_add_ns", 00:20:42.230 "params": { 00:20:42.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.230 "namespace": { 00:20:42.230 "nsid": 1, 00:20:42.230 "bdev_name": "malloc0", 00:20:42.230 "nguid": "9B9CC44C7BB5434ABF212F32FDECB6BB", 00:20:42.230 "uuid": "9b9cc44c-7bb5-434a-bf21-2f32fdecb6bb", 00:20:42.230 "no_auto_visible": false 00:20:42.230 } 00:20:42.230 } 00:20:42.230 }, 00:20:42.230 { 00:20:42.230 "method": "nvmf_subsystem_add_listener", 00:20:42.230 "params": { 00:20:42.230 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.230 "listen_address": { 00:20:42.230 "trtype": "TCP", 00:20:42.230 "adrfam": "IPv4", 
00:20:42.230 "traddr": "10.0.0.2", 00:20:42.230 "trsvcid": "4420" 00:20:42.230 }, 00:20:42.230 "secure_channel": false, 00:20:42.230 "sock_impl": "ssl" 00:20:42.230 } 00:20:42.230 } 00:20:42.230 ] 00:20:42.230 } 00:20:42.230 ] 00:20:42.230 }' 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3551409 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3551409 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3551409 ']' 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.230 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.230 [2024-12-09 11:35:34.338763] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:42.230 [2024-12-09 11:35:34.338822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.491 [2024-12-09 11:35:34.417353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.491 [2024-12-09 11:35:34.452980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.491 [2024-12-09 11:35:34.453017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.491 [2024-12-09 11:35:34.453025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.491 [2024-12-09 11:35:34.453032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.491 [2024-12-09 11:35:34.453038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.491 [2024-12-09 11:35:34.453626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.751 [2024-12-09 11:35:34.653614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.751 [2024-12-09 11:35:34.685626] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.751 [2024-12-09 11:35:34.685844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3551525 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3551525 /var/tmp/bdevperf.sock 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3551525 ']' 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.012 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:43.012 "subsystems": [ 00:20:43.012 { 00:20:43.012 "subsystem": "keyring", 00:20:43.012 "config": [ 00:20:43.012 { 00:20:43.012 "method": "keyring_file_add_key", 00:20:43.012 "params": { 00:20:43.012 "name": "key0", 00:20:43.012 "path": "/tmp/tmp.GONscGFP1k" 00:20:43.012 } 00:20:43.012 } 00:20:43.012 ] 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "subsystem": "iobuf", 00:20:43.012 "config": [ 00:20:43.012 { 00:20:43.012 "method": "iobuf_set_options", 00:20:43.012 "params": { 00:20:43.012 "small_pool_count": 8192, 00:20:43.012 "large_pool_count": 1024, 00:20:43.012 "small_bufsize": 8192, 00:20:43.012 "large_bufsize": 135168, 00:20:43.012 "enable_numa": false 00:20:43.012 } 00:20:43.012 } 00:20:43.012 ] 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "subsystem": "sock", 00:20:43.012 "config": [ 00:20:43.012 { 00:20:43.012 "method": "sock_set_default_impl", 00:20:43.012 "params": { 00:20:43.012 "impl_name": "posix" 00:20:43.012 } 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "method": "sock_impl_set_options", 00:20:43.012 "params": { 00:20:43.012 "impl_name": "ssl", 00:20:43.012 "recv_buf_size": 4096, 00:20:43.012 "send_buf_size": 4096, 00:20:43.012 "enable_recv_pipe": true, 00:20:43.012 "enable_quickack": false, 00:20:43.012 "enable_placement_id": 0, 00:20:43.012 "enable_zerocopy_send_server": true, 00:20:43.012 "enable_zerocopy_send_client": false, 00:20:43.012 "zerocopy_threshold": 0, 00:20:43.012 "tls_version": 0, 00:20:43.012 "enable_ktls": false 00:20:43.012 } 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "method": "sock_impl_set_options", 00:20:43.012 "params": { 00:20:43.012 "impl_name": "posix", 00:20:43.012 "recv_buf_size": 2097152, 00:20:43.012 "send_buf_size": 2097152, 00:20:43.012 "enable_recv_pipe": true, 00:20:43.012 "enable_quickack": false, 00:20:43.012 "enable_placement_id": 0, 00:20:43.012 "enable_zerocopy_send_server": true, 00:20:43.012 "enable_zerocopy_send_client": false, 00:20:43.012 "zerocopy_threshold": 0, 00:20:43.012 "tls_version": 0, 00:20:43.012 "enable_ktls": false 00:20:43.012 } 00:20:43.012 } 00:20:43.012 ] 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "subsystem": "vmd", 00:20:43.012 "config": [] 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "subsystem": "accel", 00:20:43.012 "config": [ 00:20:43.012 { 00:20:43.012 "method": "accel_set_options", 00:20:43.012 "params": { 00:20:43.012 "small_cache_size": 128, 00:20:43.012 "large_cache_size": 16, 00:20:43.012 "task_count": 2048, 00:20:43.012 "sequence_count": 2048, 00:20:43.012 "buf_count": 2048 00:20:43.012 } 00:20:43.012 } 00:20:43.012 ] 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "subsystem": "bdev", 00:20:43.012 "config": [ 00:20:43.012 { 00:20:43.012 "method": "bdev_set_options", 00:20:43.012 "params": { 00:20:43.012 "bdev_io_pool_size": 65535, 00:20:43.012 "bdev_io_cache_size": 256, 00:20:43.012 "bdev_auto_examine": true, 00:20:43.012 "iobuf_small_cache_size": 128, 00:20:43.012 "iobuf_large_cache_size": 16 00:20:43.012 } 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "method": 
"bdev_raid_set_options", 00:20:43.012 "params": { 00:20:43.012 "process_window_size_kb": 1024, 00:20:43.012 "process_max_bandwidth_mb_sec": 0 00:20:43.012 } 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "method": "bdev_iscsi_set_options", 00:20:43.012 "params": { 00:20:43.012 "timeout_sec": 30 00:20:43.012 } 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "method": "bdev_nvme_set_options", 00:20:43.012 "params": { 00:20:43.012 "action_on_timeout": "none", 00:20:43.012 "timeout_us": 0, 00:20:43.012 "timeout_admin_us": 0, 00:20:43.012 "keep_alive_timeout_ms": 10000, 00:20:43.012 "arbitration_burst": 0, 00:20:43.012 "low_priority_weight": 0, 00:20:43.012 "medium_priority_weight": 0, 00:20:43.012 "high_priority_weight": 0, 00:20:43.012 "nvme_adminq_poll_period_us": 10000, 00:20:43.012 "nvme_ioq_poll_period_us": 0, 00:20:43.012 "io_queue_requests": 512, 00:20:43.012 "delay_cmd_submit": true, 00:20:43.012 "transport_retry_count": 4, 00:20:43.012 "bdev_retry_count": 3, 00:20:43.012 "transport_ack_timeout": 0, 00:20:43.012 "ctrlr_loss_timeout_sec": 0, 00:20:43.012 "reconnect_delay_sec": 0, 00:20:43.012 "fast_io_fail_timeout_sec": 0, 00:20:43.012 "disable_auto_failback": false, 00:20:43.012 "generate_uuids": false, 00:20:43.012 "transport_tos": 0, 00:20:43.012 "nvme_error_stat": false, 00:20:43.012 "rdma_srq_size": 0, 00:20:43.012 "io_path_stat": false, 00:20:43.012 "allow_accel_sequence": false, 00:20:43.012 "rdma_max_cq_size": 0, 00:20:43.012 "rdma_cm_event_timeout_ms": 0, 00:20:43.012 "dhchap_digests": [ 00:20:43.012 "sha256", 00:20:43.012 "sha384", 00:20:43.012 "sha512" 00:20:43.012 ], 00:20:43.012 "dhchap_dhgroups": [ 00:20:43.012 "null", 00:20:43.012 "ffdhe2048", 00:20:43.012 "ffdhe3072", 00:20:43.012 "ffdhe4096", 00:20:43.012 "ffdhe6144", 00:20:43.012 "ffdhe8192" 00:20:43.012 ] 00:20:43.012 } 00:20:43.012 }, 00:20:43.012 { 00:20:43.012 "method": "bdev_nvme_attach_controller", 00:20:43.012 "params": { 00:20:43.012 "name": "nvme0", 00:20:43.012 "trtype": "TCP", 00:20:43.012 "adrfam": "IPv4", 00:20:43.012 "traddr": "10.0.0.2", 00:20:43.013 "trsvcid": "4420", 00:20:43.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.013 "prchk_reftag": false, 00:20:43.013 "prchk_guard": false, 00:20:43.013 "ctrlr_loss_timeout_sec": 0, 00:20:43.013 "reconnect_delay_sec": 0, 00:20:43.013 "fast_io_fail_timeout_sec": 0, 00:20:43.013 "psk": "key0", 00:20:43.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.013 "hdgst": false, 00:20:43.013 "ddgst": false, 00:20:43.013 "multipath": "multipath" 00:20:43.013 } 00:20:43.013 }, 00:20:43.013 { 00:20:43.013 "method": "bdev_nvme_set_hotplug", 00:20:43.013 "params": { 00:20:43.013 "period_us": 100000, 00:20:43.013 "enable": false 00:20:43.013 } 00:20:43.013 }, 00:20:43.013 { 00:20:43.013 "method": "bdev_enable_histogram", 00:20:43.013 "params": { 00:20:43.013 "name": "nvme0n1", 00:20:43.013 "enable": true 00:20:43.013 } 00:20:43.013 }, 00:20:43.013 { 00:20:43.013 "method": "bdev_wait_for_examine" 00:20:43.013 } 00:20:43.013 ] 00:20:43.013 }, 00:20:43.013 { 00:20:43.013 "subsystem": "nbd", 00:20:43.013 "config": [] 00:20:43.013 } 00:20:43.013 ] 00:20:43.013 }' 00:20:43.272 [2024-12-09 11:35:35.210009] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:20:43.272 [2024-12-09 11:35:35.210067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551525 ] 00:20:43.272 [2024-12-09 11:35:35.296503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.272 [2024-12-09 11:35:35.327327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.532 [2024-12-09 11:35:35.463610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.102 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.102 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.102 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:44.102 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:44.102 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.102 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.362 Running I/O for 1 seconds... 00:20:45.301 4881.00 IOPS, 19.07 MiB/s 00:20:45.301 Latency(us) 00:20:45.301 [2024-12-09T10:35:37.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.301 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:45.301 Verification LBA range: start 0x0 length 0x2000 00:20:45.301 nvme0n1 : 1.02 4925.65 19.24 0.00 0.00 25771.50 4532.91 72089.60 00:20:45.301 [2024-12-09T10:35:37.463Z] =================================================================================================================== 00:20:45.301 [2024-12-09T10:35:37.463Z] Total : 4925.65 19.24 0.00 0.00 25771.50 4532.91 72089.60 00:20:45.301 { 00:20:45.301 "results": [ 00:20:45.301 { 00:20:45.301 "job": "nvme0n1", 00:20:45.301 "core_mask": "0x2", 00:20:45.301 "workload": "verify", 00:20:45.301 "status": "finished", 00:20:45.301 "verify_range": { 00:20:45.301 "start": 0, 00:20:45.301 "length": 8192 00:20:45.301 }, 00:20:45.301 "queue_depth": 128, 00:20:45.301 "io_size": 4096, 00:20:45.301 "runtime": 1.016921, 00:20:45.301 "iops": 4925.653025161247, 00:20:45.301 "mibps": 19.24083212953612, 00:20:45.301 "io_failed": 0, 00:20:45.301 "io_timeout": 0, 00:20:45.301 "avg_latency_us": 25771.499386437747, 00:20:45.301 "min_latency_us": 4532.906666666667, 00:20:45.301 "max_latency_us": 72089.6 00:20:45.301 } 00:20:45.301 ], 00:20:45.301 "core_count": 1 00:20:45.301 } 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
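Before starting I/O, the script confirms the TLS connection actually produced a controller by querying the bdevperf RPC socket, then drives the workload with bdevperf.py; the JSON summary above (4925.65 IOPS over a 1.017 s runtime) is what perform_tests reports back. A sketch of that sequence, assuming bdevperf is already listening on /var/tmp/bdevperf.sock and commands run from the spdk tree:

# Verify the controller attached over TLS, then kick off the run.
name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || { echo "controller missing" >&2; exit 1; }
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests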
00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:45.301 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:45.302 nvmf_trace.0 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3551525 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3551525 ']' 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3551525 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.302 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551525 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551525' 00:20:45.561 killing process with pid 3551525 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3551525 00:20:45.561 Received shutdown signal, test time was about 1.000000 seconds 00:20:45.561 00:20:45.561 Latency(us) 00:20:45.561 [2024-12-09T10:35:37.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.561 [2024-12-09T10:35:37.723Z] =================================================================================================================== 00:20:45.561 [2024-12-09T10:35:37.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3551525 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.561 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.562 rmmod nvme_tcp 00:20:45.562 rmmod nvme_fabrics 00:20:45.562 rmmod nvme_keyring 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.562 11:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3551409 ']' 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3551409 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3551409 ']' 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3551409 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551409 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551409' 00:20:45.562 killing process with pid 3551409 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3551409 00:20:45.562 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3551409 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.821 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.gFK73LExL7 /tmp/tmp.uBUfthLvJ1 /tmp/tmp.GONscGFP1k 00:20:48.364 00:20:48.364 real 1m21.796s 00:20:48.364 user 2m5.516s 00:20:48.364 sys 0m27.534s 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.364 ************************************ 00:20:48.364 END TEST nvmf_tls 
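The teardown that closes the TLS test follows a fixed pattern: kill bdevperf (reactor_1), kill the nvmf target (reactor_0), unload the nvme transport modules, strip only the SPDK-tagged iptables rules, and tear down the namespace. A condensed sketch of that cleanup; the PIDs are placeholders for 3551525/3551409 in the log, and the explicit netns delete is an assumption about what _remove_spdk_ns does, since its body is not shown here:

# Teardown sketch: kill test processes, unload modules, drop only the
# iptables rules tagged with an SPDK_NVMF comment, and flush addresses.
kill "$bdevperf_pid" "$nvmf_pid"
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep non-SPDK rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed cleanup step
ip -4 addr flush cvl_0_1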
00:20:48.364 ************************************ 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.364 ************************************ 00:20:48.364 START TEST nvmf_fips 00:20:48.364 ************************************ 00:20:48.364 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:48.364 * Looking for test storage... 00:20:48.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:48.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.364 --rc genhtml_branch_coverage=1 00:20:48.364 --rc genhtml_function_coverage=1 00:20:48.364 --rc genhtml_legend=1 00:20:48.364 --rc geninfo_all_blocks=1 00:20:48.364 --rc geninfo_unexecuted_blocks=1 00:20:48.364 00:20:48.364 ' 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:48.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.364 --rc genhtml_branch_coverage=1 00:20:48.364 --rc genhtml_function_coverage=1 00:20:48.364 --rc genhtml_legend=1 00:20:48.364 --rc geninfo_all_blocks=1 00:20:48.364 --rc geninfo_unexecuted_blocks=1 00:20:48.364 00:20:48.364 ' 00:20:48.364 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:48.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.364 --rc genhtml_branch_coverage=1 00:20:48.365 --rc genhtml_function_coverage=1 00:20:48.365 --rc genhtml_legend=1 00:20:48.365 --rc geninfo_all_blocks=1 00:20:48.365 --rc geninfo_unexecuted_blocks=1 00:20:48.365 00:20:48.365 ' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:48.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.365 --rc genhtml_branch_coverage=1 00:20:48.365 --rc genhtml_function_coverage=1 00:20:48.365 --rc genhtml_legend=1 00:20:48.365 --rc geninfo_all_blocks=1 00:20:48.365 --rc geninfo_unexecuted_blocks=1 00:20:48.365 00:20:48.365 ' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:48.365 11:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:48.365 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:48.366 Error setting digest 00:20:48.366 408225D0537F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:48.366 408225D0537F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.366 
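The FIPS preamble above gates on the OpenSSL version (openssl version piped through awk must compare >= 3.0.0 via cmp_versions), confirms /usr/lib64/ossl-modules/fips.so exists, generates a FIPS-only OpenSSL config, and then proves the configuration took effect through the deliberately failing MD5 digest: under a FIPS provider, legacy algorithms must refuse to initialize, so the "Error setting digest" output is the expected, passing result. A sketch of that self-check, assuming build_openssl_config already wrote spdk_fips.conf:

# FIPS sanity check: providers must list fips, and MD5 must fail.
export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep name        # expect base + fips provider names
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 digest succeeded; FIPS mode is not active" >&2
    exit 1
fi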
11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.366 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.506 11:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:56.506 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:56.506 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.506 11:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:56.506 Found net devices under 0000:31:00.0: cvl_0_0 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:56.506 Found net devices under 0000:31:00.1: cvl_0_1 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.506 11:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:20:56.506 00:20:56.506 --- 10.0.0.2 ping statistics --- 00:20:56.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.506 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:20:56.506 00:20:56.506 --- 10.0.0.1 ping statistics --- 00:20:56.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.506 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.506 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3556522 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3556522 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3556522 ']' 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.507 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:56.507 [2024-12-09 11:35:47.915143] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
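The network setup above gives the test a self-contained TCP path over the two physical e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2 (the target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side), an iptables rule admits port 4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed sketch of that topology as the trace performs it:

# Target NIC lives in a namespace; initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the test tags its rule with an SPDK_NVMF comment so teardown can find it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmf_tgt then runs as: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2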
00:20:56.507 [2024-12-09 11:35:47.915200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.507 [2024-12-09 11:35:48.010660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.507 [2024-12-09 11:35:48.050046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.507 [2024-12-09 11:35:48.050088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.507 [2024-12-09 11:35:48.050096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.507 [2024-12-09 11:35:48.050103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.507 [2024-12-09 11:35:48.050109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.507 [2024-12-09 11:35:48.050815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.nTe 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.nTe 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.nTe 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.nTe 00:20:56.768 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.768 [2024-12-09 11:35:48.920288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.029 [2024-12-09 11:35:48.936280] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.029 [2024-12-09 11:35:48.936641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.029 malloc0 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.029 11:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3556595 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3556595 /var/tmp/bdevperf.sock 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3556595 ']' 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:57.029 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.029 [2024-12-09 11:35:49.078559] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:20:57.029 [2024-12-09 11:35:49.078638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3556595 ] 00:20:57.029 [2024-12-09 11:35:49.145226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.029 [2024-12-09 11:35:49.181669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.973 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.973 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:57.973 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.nTe 00:20:57.973 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:58.233 [2024-12-09 11:35:50.206594] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.233 TLSTESTn1 00:20:58.233 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:58.494 Running I/O for 10 seconds... 
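The bdevperf side of the FIPS test mirrors the earlier TLS flow: the interchange PSK is written to a mode-0600 temp file, registered with the bdevperf keyring over its private RPC socket, and referenced by name when attaching the controller, which is what produces the TLSTESTn1 bdev exercised for ten seconds above. A sketch of those three steps, with the key material and temp-file template taken from the log and paths relative to the spdk tree:

# Register a TLS PSK with bdevperf and attach the target through it.
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > "$key_path"
chmod 0600 "$key_path"
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0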
00:21:00.427 5123.00 IOPS, 20.01 MiB/s [2024-12-09T10:35:53.532Z] 5026.00 IOPS, 19.63 MiB/s [2024-12-09T10:35:54.473Z] 5043.33 IOPS, 19.70 MiB/s [2024-12-09T10:35:55.415Z] 4999.75 IOPS, 19.53 MiB/s [2024-12-09T10:35:56.799Z] 4975.80 IOPS, 19.44 MiB/s [2024-12-09T10:35:57.740Z] 4912.00 IOPS, 19.19 MiB/s [2024-12-09T10:35:58.683Z] 4947.14 IOPS, 19.32 MiB/s [2024-12-09T10:35:59.626Z] 4942.38 IOPS, 19.31 MiB/s [2024-12-09T10:36:00.570Z] 4945.44 IOPS, 19.32 MiB/s [2024-12-09T10:36:00.570Z] 4930.00 IOPS, 19.26 MiB/s 00:21:08.408 Latency(us) 00:21:08.408 [2024-12-09T10:36:00.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.408 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:08.408 Verification LBA range: start 0x0 length 0x2000 00:21:08.408 TLSTESTn1 : 10.01 4935.40 19.28 0.00 0.00 25902.28 4997.12 43253.76 00:21:08.408 [2024-12-09T10:36:00.570Z] =================================================================================================================== 00:21:08.408 [2024-12-09T10:36:00.570Z] Total : 4935.40 19.28 0.00 0.00 25902.28 4997.12 43253.76 00:21:08.408 { 00:21:08.408 "results": [ 00:21:08.408 { 00:21:08.408 "job": "TLSTESTn1", 00:21:08.408 "core_mask": "0x4", 00:21:08.408 "workload": "verify", 00:21:08.408 "status": "finished", 00:21:08.408 "verify_range": { 00:21:08.408 "start": 0, 00:21:08.408 "length": 8192 00:21:08.408 }, 00:21:08.408 "queue_depth": 128, 00:21:08.408 "io_size": 4096, 00:21:08.408 "runtime": 10.014995, 00:21:08.408 "iops": 4935.399368646714, 00:21:08.408 "mibps": 19.278903783776226, 00:21:08.408 "io_failed": 0, 00:21:08.408 "io_timeout": 0, 00:21:08.408 "avg_latency_us": 25902.277087750535, 00:21:08.408 "min_latency_us": 4997.12, 00:21:08.408 "max_latency_us": 43253.76 00:21:08.408 } 00:21:08.408 ], 00:21:08.408 "core_count": 1 00:21:08.408 } 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:08.408 nvmf_trace.0 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3556595 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3556595 ']' 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # 
kill -0 3556595 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:08.408 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3556595 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3556595' 00:21:08.670 killing process with pid 3556595 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3556595 00:21:08.670 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.670 00:21:08.670 Latency(us) 00:21:08.670 [2024-12-09T10:36:00.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.670 [2024-12-09T10:36:00.832Z] =================================================================================================================== 00:21:08.670 [2024-12-09T10:36:00.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3556595 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.670 rmmod nvme_tcp 00:21:08.670 rmmod nvme_fabrics 00:21:08.670 rmmod nvme_keyring 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3556522 ']' 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3556522 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3556522 ']' 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3556522 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.670 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3556522 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3556522' 00:21:08.931 killing process with pid 3556522 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3556522 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3556522 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.931 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.nTe 00:21:11.483 00:21:11.483 real 0m23.049s 00:21:11.483 user 0m24.205s 00:21:11.483 sys 0m10.063s 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:11.483 ************************************ 00:21:11.483 END TEST nvmf_fips 00:21:11.483 ************************************ 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.483 ************************************ 00:21:11.483 START TEST nvmf_control_msg_list 00:21:11.483 ************************************ 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:11.483 * Looking for test storage... 
00:21:11.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:11.483 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.484 --rc genhtml_branch_coverage=1 00:21:11.484 --rc genhtml_function_coverage=1 00:21:11.484 --rc genhtml_legend=1 00:21:11.484 --rc geninfo_all_blocks=1 00:21:11.484 --rc geninfo_unexecuted_blocks=1 00:21:11.484 00:21:11.484 ' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.484 --rc genhtml_branch_coverage=1 00:21:11.484 --rc genhtml_function_coverage=1 00:21:11.484 --rc genhtml_legend=1 00:21:11.484 --rc geninfo_all_blocks=1 00:21:11.484 --rc geninfo_unexecuted_blocks=1 00:21:11.484 00:21:11.484 ' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.484 --rc genhtml_branch_coverage=1 00:21:11.484 --rc genhtml_function_coverage=1 00:21:11.484 --rc genhtml_legend=1 00:21:11.484 --rc geninfo_all_blocks=1 00:21:11.484 --rc geninfo_unexecuted_blocks=1 00:21:11.484 00:21:11.484 ' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.484 --rc genhtml_branch_coverage=1 00:21:11.484 --rc genhtml_function_coverage=1 00:21:11.484 --rc genhtml_legend=1 00:21:11.484 --rc geninfo_all_blocks=1 00:21:11.484 --rc geninfo_unexecuted_blocks=1 00:21:11.484 00:21:11.484 ' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.484 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:19.621 11:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.621 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:19.622 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.622 11:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:19.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:19.622 Found net devices under 0000:31:00.0: cvl_0_0 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:19.622 Found net devices under 0000:31:00.1: cvl_0_1 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.622 11:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:21:19.622 00:21:19.622 --- 10.0.0.2 ping statistics --- 00:21:19.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.622 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:21:19.622 00:21:19.622 --- 10.0.0.1 ping statistics --- 00:21:19.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.622 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3563822 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3563822 00:21:19.622 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3563822 ']' 00:21:19.623 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.623 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.623 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.623 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.623 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.623 [2024-12-09 11:36:10.900741] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:21:19.623 [2024-12-09 11:36:10.900808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.623 [2024-12-09 11:36:10.984987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.623 [2024-12-09 11:36:11.026522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.623 [2024-12-09 11:36:11.026560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.623 [2024-12-09 11:36:11.026568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.623 [2024-12-09 11:36:11.026575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.623 [2024-12-09 11:36:11.026580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
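The nvmf_tgt startup notices above come from the target launching inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init built a moment earlier: the E810 port cvl_0_0 is hidden in the namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), turning one host's two ports into a physical loopback. A condensed sketch of that plumbing, with the interface names, addresses, and binary path exactly as they appear in this trace:

    # Target port goes into its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP (port 4420) in, tagged with a comment so cleanup can strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions, then launch the target inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

Because the target lives in the namespace, every later target-side command in this test is wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP at common.sh line 293), while the initiator-side spdk_nvme_perf instances run unwrapped in the root namespace and reach 10.0.0.2 over cvl_0_1.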
00:21:19.623 [2024-12-09 11:36:11.027166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.623 [2024-12-09 11:36:11.746522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.623 Malloc0 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.623 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.889 11:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.889 [2024-12-09 11:36:11.797490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3563995 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3563997 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3563998 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3563995 00:21:19.889 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.889 [2024-12-09 11:36:11.857893] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:19.889 [2024-12-09 11:36:11.867835] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:19.889 [2024-12-09 11:36:11.877808] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:20.830 Initializing NVMe Controllers 00:21:20.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:20.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:20.830 Initialization complete. Launching workers. 
00:21:20.830 ======================================================== 00:21:20.830 Latency(us) 00:21:20.830 Device Information : IOPS MiB/s Average min max 00:21:20.830 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40906.52 40827.52 41023.26 00:21:20.830 ======================================================== 00:21:20.830 Total : 25.00 0.10 40906.52 40827.52 41023.26 00:21:20.830 00:21:20.830 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3563997 00:21:21.091 Initializing NVMe Controllers 00:21:21.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:21.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:21.091 Initialization complete. Launching workers. 00:21:21.091 ======================================================== 00:21:21.091 Latency(us) 00:21:21.091 Device Information : IOPS MiB/s Average min max 00:21:21.091 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40902.55 40779.34 41015.93 00:21:21.091 ======================================================== 00:21:21.091 Total : 25.00 0.10 40902.55 40779.34 41015.93 00:21:21.091 00:21:21.091 Initializing NVMe Controllers 00:21:21.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:21.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:21.091 Initialization complete. Launching workers. 00:21:21.091 ======================================================== 00:21:21.091 Latency(us) 00:21:21.091 Device Information : IOPS MiB/s Average min max 00:21:21.091 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40931.36 40811.16 41549.32 00:21:21.091 ======================================================== 00:21:21.091 Total : 25.00 0.10 40931.36 40811.16 41549.32 00:21:21.091 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3563998 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.091 rmmod nvme_tcp 00:21:21.091 rmmod nvme_fabrics 00:21:21.091 rmmod nvme_keyring 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 3563822 ']' 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3563822 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3563822 ']' 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3563822 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.091 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3563822 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3563822' 00:21:21.352 killing process with pid 3563822 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3563822 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3563822 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.352 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.896 00:21:23.896 real 0m12.339s 00:21:23.896 user 0m7.947s 00:21:23.896 sys 0m6.485s 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:23.896 ************************************ 00:21:23.896 END TEST nvmf_control_msg_list 00:21:23.896 
************************************ 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.896 ************************************ 00:21:23.896 START TEST nvmf_wait_for_buf 00:21:23.896 ************************************ 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:23.896 * Looking for test storage... 00:21:23.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:23.896 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.897 --rc genhtml_branch_coverage=1 00:21:23.897 --rc genhtml_function_coverage=1 00:21:23.897 --rc genhtml_legend=1 00:21:23.897 --rc geninfo_all_blocks=1 00:21:23.897 --rc geninfo_unexecuted_blocks=1 00:21:23.897 00:21:23.897 ' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.897 --rc genhtml_branch_coverage=1 00:21:23.897 --rc genhtml_function_coverage=1 00:21:23.897 --rc genhtml_legend=1 00:21:23.897 --rc geninfo_all_blocks=1 00:21:23.897 --rc geninfo_unexecuted_blocks=1 00:21:23.897 00:21:23.897 ' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.897 --rc genhtml_branch_coverage=1 00:21:23.897 --rc genhtml_function_coverage=1 00:21:23.897 --rc genhtml_legend=1 00:21:23.897 --rc geninfo_all_blocks=1 00:21:23.897 --rc geninfo_unexecuted_blocks=1 00:21:23.897 00:21:23.897 ' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:23.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.897 --rc genhtml_branch_coverage=1 00:21:23.897 --rc genhtml_function_coverage=1 00:21:23.897 --rc genhtml_legend=1 00:21:23.897 --rc geninfo_all_blocks=1 00:21:23.897 --rc geninfo_unexecuted_blocks=1 00:21:23.897 00:21:23.897 ' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.897 11:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.897 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:23.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.898 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.038 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.038 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.039 
11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:32.039 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:32.039 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:32.039 Found net devices under 0000:31:00.0: cvl_0_0 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:32.039 Found net devices under 0000:31:00.1: cvl_0_1 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.039 11:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.039 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:32.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:21:32.039 00:21:32.039 --- 10.0.0.2 ping statistics --- 00:21:32.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.039 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:21:32.039 00:21:32.039 --- 10.0.0.1 ping statistics --- 00:21:32.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.039 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:32.039 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3568589 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3568589 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3568589 ']' 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.040 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.040 [2024-12-09 11:36:23.285201] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
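The namespace plumbing traced above is the usual nvmf/common.sh loopback topology: one port of the dual-port e810 NIC is moved into a private network namespace to act as the target, the second port stays in the root namespace as the initiator, and the target app is then launched inside that namespace. A condensed sketch of those same commands, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing this particular log happens to use:

```bash
# Editorial sketch of the setup traced above; interface names and addresses
# are the ones from this log, not fixed constants.
ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first e810 port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
```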
00:21:32.040 [2024-12-09 11:36:23.285265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.040 [2024-12-09 11:36:23.369292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.040 [2024-12-09 11:36:23.409825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.040 [2024-12-09 11:36:23.409865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.040 [2024-12-09 11:36:23.409873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.040 [2024-12-09 11:36:23.409884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.040 [2024-12-09 11:36:23.409889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.040 [2024-12-09 11:36:23.410485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.040 11:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.040 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.301 Malloc0 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.301 [2024-12-09 11:36:24.214516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:32.301 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.302 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.302 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.302 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:32.302 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.302 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.302 [2024-12-09 11:36:24.250747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.302 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.302 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:32.302 [2024-12-09 11:36:24.353086] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:33.689 Initializing NVMe Controllers 00:21:33.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:33.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:33.689 Initialization complete. Launching workers. 00:21:33.689 ======================================================== 00:21:33.689 Latency(us) 00:21:33.689 Device Information : IOPS MiB/s Average min max 00:21:33.689 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32264.75 8001.51 63850.36 00:21:33.689 ======================================================== 00:21:33.689 Total : 129.00 16.12 32264.75 8001.51 63850.36 00:21:33.689 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:33.950 rmmod nvme_tcp 00:21:33.950 rmmod nvme_fabrics 00:21:33.950 rmmod nvme_keyring 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3568589 ']' 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3568589 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3568589 ']' 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3568589 00:21:33.950 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:33.950 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.950 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3568589 00:21:33.950 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.950 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.950 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3568589' 00:21:33.950 killing process with pid 3568589 00:21:33.950 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3568589 00:21:33.950 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3568589 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.211 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.126 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:36.126 00:21:36.126 real 0m12.715s 00:21:36.126 user 0m5.118s 00:21:36.126 sys 0m6.146s 00:21:36.126 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.126 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:36.126 ************************************ 00:21:36.126 END TEST nvmf_wait_for_buf 00:21:36.126 ************************************ 00:21:36.387 11:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:36.387 11:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:36.387 11:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:36.387 11:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:36.387 11:36:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:36.387 11:36:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:44.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:44.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:44.529 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:44.530 Found net devices under 0000:31:00.0: cvl_0_0 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:44.530 Found net devices under 0000:31:00.1: cvl_0_1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.530 ************************************ 00:21:44.530 START TEST nvmf_perf_adq 00:21:44.530 ************************************ 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:44.530 * Looking for test storage... 00:21:44.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.530 11:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.530 --rc genhtml_branch_coverage=1 00:21:44.530 --rc genhtml_function_coverage=1 00:21:44.530 --rc genhtml_legend=1 00:21:44.530 --rc geninfo_all_blocks=1 00:21:44.530 --rc geninfo_unexecuted_blocks=1 00:21:44.530 00:21:44.530 ' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.530 --rc genhtml_branch_coverage=1 00:21:44.530 --rc genhtml_function_coverage=1 00:21:44.530 --rc genhtml_legend=1 00:21:44.530 --rc geninfo_all_blocks=1 00:21:44.530 --rc geninfo_unexecuted_blocks=1 00:21:44.530 00:21:44.530 ' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.530 --rc genhtml_branch_coverage=1 00:21:44.530 --rc genhtml_function_coverage=1 00:21:44.530 --rc genhtml_legend=1 00:21:44.530 --rc geninfo_all_blocks=1 00:21:44.530 --rc geninfo_unexecuted_blocks=1 00:21:44.530 00:21:44.530 ' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:44.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.530 --rc genhtml_branch_coverage=1 00:21:44.530 --rc genhtml_function_coverage=1 00:21:44.530 --rc genhtml_legend=1 00:21:44.530 --rc geninfo_all_blocks=1 00:21:44.530 --rc geninfo_unexecuted_blocks=1 00:21:44.530 00:21:44.530 ' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
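The lcov preamble that repeats at the top of each of these tests (here and in nvmf_wait_for_buf above) is scripts/common.sh deciding that lcov 1.15 predates 2.x and therefore exporting the legacy `--rc lcov_branch_coverage=1` option spelling. A minimal sketch of that field-by-field comparison, reconstructed from the trace rather than copied from scripts/common.sh, so the function name and edge-case handling are approximate:

```bash
# Approximate reconstruction of the cmp_versions walk seen in the trace:
# split both versions on '.', '-' and ':', then compare numeric fields
# left to right until one side wins.
version_lt() {  # usage: version_lt 1.15 2  -> exit 0 when $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        [[ $d1 =~ ^[0-9]+$ && $d2 =~ ^[0-9]+$ ]] || return 1  # non-numeric field: bail out
        (( d1 > d2 )) && return 1
        (( d1 < d2 )) && return 0
    done
    return 1  # equal versions are not "less than"
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    # abbreviated: the real script also sets matching genhtml/geninfo options
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
```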
00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.530 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:44.531 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.531 11:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:51.117 11:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:51.117 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:51.117 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:51.117 Found net devices under 0000:31:00.0: cvl_0_0 00:21:51.117 11:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:21:51.117 Found net devices under 0000:31:00.1: cvl_0_1
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 ))
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:21:51.117 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:21:52.060 11:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:21:54.604 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
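Note: the `adq_reload_driver` step traced above gives each pass a clean slate: sch_mqprio is loaded for the later tc configuration, and the ice driver is unloaded and reloaded so no channel or filter state survives from a previous run; the sleep gives the E810 ports time to come back before `nvmftestinit` re-enumerates them. As a minimal standalone sketch (modules and timing taken from the trace):

    modprobe -a sch_mqprio        # queuing-discipline module ADQ's tc setup needs
    rmmod ice                     # drop any ADQ channels/filters left on the NIC
    modprobe ice
    sleep 5                       # wait for the E810 ports to relink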
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:21:59.898 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:59.899 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:59.899 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:59.899 Found net devices under 0000:31:00.0: cvl_0_0 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:59.899 Found net devices under 0000:31:00.1: cvl_0_1 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:21:59.899 00:21:59.899 --- 10.0.0.2 ping statistics --- 00:21:59.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.899 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:21:59.899 00:21:59.899 --- 10.0.0.1 ping statistics --- 00:21:59.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.899 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3578969 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3578969 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3578969 ']' 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.899 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.900 [2024-12-09 11:36:51.611912] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
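Note: everything `nvmftestinit` traced above reduces to a small amount of network plumbing. The first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side (10.0.0.2), while the second port (cvl_0_1) stays in the default namespace as the initiator side (10.0.0.1); an iptables rule opens the NVMe/TCP port and the two pings prove connectivity in both directions before the target starts. A condensed sketch of the same steps, with device and namespace names taken from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
    modprobe nvme-tcp

nvmf_tgt is then launched inside the namespace with --wait-for-rpc, so all configuration happens over the RPC socket; /var/tmp/spdk.sock is a UNIX socket on the filesystem, which is why it stays reachable from the default namespace.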
00:21:59.900 [2024-12-09 11:36:51.611961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:59.900 [2024-12-09 11:36:51.691373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:59.900 [2024-12-09 11:36:51.728816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:59.900 [2024-12-09 11:36:51.728850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:59.900 [2024-12-09 11:36:51.728858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:59.900 [2024-12-09 11:36:51.728865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:59.900 [2024-12-09 11:36:51.728871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:59.900 [2024-12-09 11:36:51.733028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:59.900 [2024-12-09 11:36:51.733068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:59.900 [2024-12-09 11:36:51.733205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:59.900 [2024-12-09 11:36:51.733357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
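Note: `adq_configure_nvmf_target 0` is the non-ADQ baseline pass: `--enable-placement-id 0` leaves queue-based socket placement off for the posix sock implementation, while the second pass later in this log repeats the same setup with `1`, which (roughly speaking) lets the target group connections by the NIC queue they arrive on — the behavior ADQ's steering relies on. As a sketch, the traced `rpc_cmd` calls correspond to these rpc.py invocations (rpc_cmd is a thin wrapper; the default /var/tmp/spdk.sock RPC socket is assumed):

    scripts/rpc.py sock_get_default_impl                   # prints the default impl; "posix" here
    scripts/rpc.py sock_impl_set_options -i posix \
        --enable-placement-id 0 --enable-zerocopy-send-server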
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 [2024-12-09 11:36:51.995130] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 Malloc1
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.900 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:00.161 [2024-12-09 11:36:52.063525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:00.161 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.161 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3578995
00:22:00.161 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
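Note: the five RPCs traced above are the entire target configuration for this test: start the framework, create the TCP transport (with an 8192-byte I/O unit, socket priority 0, and the harness's usual -o transport option), back the namespace with a 64 MB malloc bdev of 512-byte blocks, and expose it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A sketch of the same sequence as direct rpc.py calls (default RPC socket assumed):

    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MB ram-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420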
00:22:00.161 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:22:02.078   "tick_rate": 2400000000,
00:22:02.078   "poll_groups": [
00:22:02.078     {
00:22:02.078       "name": "nvmf_tgt_poll_group_000",
00:22:02.078       "admin_qpairs": 1,
00:22:02.078       "io_qpairs": 1,
00:22:02.078       "current_admin_qpairs": 1,
00:22:02.078       "current_io_qpairs": 1,
00:22:02.078       "pending_bdev_io": 0,
00:22:02.078       "completed_nvme_io": 19567,
00:22:02.078       "transports": [
00:22:02.078         {
00:22:02.078           "trtype": "TCP"
00:22:02.078         }
00:22:02.078       ]
00:22:02.078     },
00:22:02.078     {
00:22:02.078       "name": "nvmf_tgt_poll_group_001",
00:22:02.078       "admin_qpairs": 0,
00:22:02.078       "io_qpairs": 1,
00:22:02.078       "current_admin_qpairs": 0,
00:22:02.078       "current_io_qpairs": 1,
00:22:02.078       "pending_bdev_io": 0,
00:22:02.078       "completed_nvme_io": 28473,
00:22:02.078       "transports": [
00:22:02.078         {
00:22:02.078           "trtype": "TCP"
00:22:02.078         }
00:22:02.078       ]
00:22:02.078     },
00:22:02.078     {
00:22:02.078       "name": "nvmf_tgt_poll_group_002",
00:22:02.078       "admin_qpairs": 0,
00:22:02.078       "io_qpairs": 1,
00:22:02.078       "current_admin_qpairs": 0,
00:22:02.078       "current_io_qpairs": 1,
00:22:02.078       "pending_bdev_io": 0,
00:22:02.078       "completed_nvme_io": 20133,
00:22:02.078       "transports": [
00:22:02.078         {
00:22:02.078           "trtype": "TCP"
00:22:02.078         }
00:22:02.078       ]
00:22:02.078     },
00:22:02.078     {
00:22:02.078       "name": "nvmf_tgt_poll_group_003",
00:22:02.078       "admin_qpairs": 0,
00:22:02.078       "io_qpairs": 1,
00:22:02.078       "current_admin_qpairs": 0,
00:22:02.078       "current_io_qpairs": 1,
00:22:02.078       "pending_bdev_io": 0,
00:22:02.078       "completed_nvme_io": 20576,
00:22:02.078       "transports": [
00:22:02.078         {
00:22:02.078           "trtype": "TCP"
00:22:02.078         }
00:22:02.078       ]
00:22:02.078     }
00:22:02.078   ]
00:22:02.078 }'
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:22:02.078 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3578995
00:22:10.212 Initializing NVMe Controllers
00:22:10.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:10.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:10.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:10.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
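Note: the nvmf_get_stats output above is the actual pass/fail evidence for this run: each of the four poll groups reports current_io_qpairs == 1, i.e. the four connections from spdk_nvme_perf landed one per target core rather than piling onto a single poll group. The check the script performs boils down to this sketch ($nvmf_stats holds the JSON captured above):

    count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' <<< "$nvmf_stats" | wc -l)
    [[ $count -ne 4 ]] && { echo 'I/O qpairs not evenly distributed'; exit 1; }

jq prints one value per poll group that matches the filter, so wc -l counts how many groups are driving exactly one I/O qpair.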
00:22:10.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:10.212 Initialization complete. Launching workers.
00:22:10.212 ========================================================
00:22:10.212                                                                             Latency(us)
00:22:10.212 Device Information                                               :     IOPS      MiB/s    Average        min        max
00:22:10.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11271.35      44.03    5684.13    1370.97   43463.98
00:22:10.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15111.03      59.03    4235.07    1260.60    9523.71
00:22:10.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13735.84      53.66    4674.00    1126.82   44648.18
00:22:10.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12885.84      50.34    4966.72    1124.71   11367.71
00:22:10.213 ========================================================
00:22:10.213 Total                                                            : 53004.07     207.05    4834.83    1124.71   44648.18
00:22:10.213
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:10.213 rmmod nvme_tcp
00:22:10.213 rmmod nvme_fabrics
00:22:10.213 rmmod nvme_keyring
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3578969 ']'
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3578969
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3578969 ']'
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3578969
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:10.213 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3578969
00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3578969'
00:22:10.473 killing process with pid 3578969
00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3578969
00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3578969
00:22:10.473 11:37:02
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.473 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.015 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:13.015 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:13.015 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:13.015 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:13.955 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:16.495 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.784 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:21.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:21.785 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:21.785 Found net devices under 0000:31:00.0: cvl_0_0 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:21.785 Found net devices under 0000:31:00.1: cvl_0_1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.785 11:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:22:21.785 00:22:21.785 --- 10.0.0.2 ping statistics --- 00:22:21.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.785 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:22:21.785 00:22:21.785 --- 10.0.0.1 ping statistics --- 00:22:21.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.785 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:21.785 net.core.busy_poll = 1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:21.785 net.core.busy_read = 1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3583776 00:22:21.785 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3583776 00:22:21.786 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:21.786 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3583776 ']' 00:22:21.786 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.786 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.786 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.786 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.786 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.786 [2024-12-09 11:37:13.923706] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:22:21.786 [2024-12-09 11:37:13.923763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.046 [2024-12-09 11:37:14.005803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.046 [2024-12-09 11:37:14.046085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
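For reference, the ADQ host configuration that the perf_adq trace above walks through, condensed into plain shell. This is a readback of the traced commands, not new behavior; the interface (cvl_0_0), namespace, address (10.0.0.2), and port (4420) are the values used in this run.

  NS=(ip netns exec cvl_0_0_ns_spdk)                 # mirrors NVMF_TARGET_NS_CMD above
  "${NS[@]}" ethtool --offload cvl_0_0 hw-tc-offload on            # enable TC offload in the ice driver
  "${NS[@]}" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                   # busy-poll sockets rather than sleep/wake
  sysctl -w net.core.busy_read=1
  # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode
  "${NS[@]}" tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  "${NS[@]}" tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, hardware-only (skip_sw)
  "${NS[@]}" tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  "${NS[@]}" ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0              # XPS/queue affinity helper invoked by the trace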
00:22:22.046 [2024-12-09 11:37:14.046126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.046 [2024-12-09 11:37:14.046134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.046 [2024-12-09 11:37:14.046140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.046 [2024-12-09 11:37:14.046146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.046 [2024-12-09 11:37:14.047903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.046 [2024-12-09 11:37:14.048045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.046 [2024-12-09 11:37:14.048204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.046 [2024-12-09 11:37:14.048205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.618 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.879 11:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.879 [2024-12-09 11:37:14.899434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.879 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.880 Malloc1 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.880 [2024-12-09 11:37:14.974458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3583871 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:22.880 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:25.427 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:25.427 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.427 11:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.427 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.427 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:25.427 "tick_rate": 2400000000, 00:22:25.427 "poll_groups": [ 00:22:25.427 { 00:22:25.427 "name": "nvmf_tgt_poll_group_000", 00:22:25.427 "admin_qpairs": 1, 00:22:25.427 "io_qpairs": 1, 00:22:25.427 "current_admin_qpairs": 1, 00:22:25.427 "current_io_qpairs": 1, 00:22:25.427 "pending_bdev_io": 0, 00:22:25.427 "completed_nvme_io": 28496, 00:22:25.427 "transports": [ 00:22:25.427 { 00:22:25.427 "trtype": "TCP" 00:22:25.427 } 00:22:25.427 ] 00:22:25.427 }, 00:22:25.427 { 00:22:25.427 "name": "nvmf_tgt_poll_group_001", 00:22:25.427 "admin_qpairs": 0, 00:22:25.427 "io_qpairs": 3, 00:22:25.427 "current_admin_qpairs": 0, 00:22:25.427 "current_io_qpairs": 3, 00:22:25.427 "pending_bdev_io": 0, 00:22:25.427 "completed_nvme_io": 41483, 00:22:25.427 "transports": [ 00:22:25.427 { 00:22:25.427 "trtype": "TCP" 00:22:25.427 } 00:22:25.427 ] 00:22:25.427 }, 00:22:25.427 { 00:22:25.427 "name": "nvmf_tgt_poll_group_002", 00:22:25.427 "admin_qpairs": 0, 00:22:25.427 "io_qpairs": 0, 00:22:25.427 "current_admin_qpairs": 0, 00:22:25.427 "current_io_qpairs": 0, 00:22:25.427 "pending_bdev_io": 0, 00:22:25.427 "completed_nvme_io": 0, 00:22:25.427 "transports": [ 00:22:25.427 { 00:22:25.427 "trtype": "TCP" 00:22:25.427 } 00:22:25.427 ] 00:22:25.427 }, 00:22:25.427 { 00:22:25.427 "name": "nvmf_tgt_poll_group_003", 00:22:25.427 "admin_qpairs": 0, 00:22:25.427 "io_qpairs": 0, 00:22:25.427 "current_admin_qpairs": 0, 00:22:25.427 "current_io_qpairs": 0, 00:22:25.427 "pending_bdev_io": 0, 00:22:25.427 "completed_nvme_io": 0, 00:22:25.427 "transports": [ 00:22:25.427 { 00:22:25.427 "trtype": "TCP" 00:22:25.427 } 00:22:25.427 ] 00:22:25.427 } 00:22:25.427 ] 00:22:25.427 }' 00:22:25.427 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:25.427 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:25.427 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:25.427 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:25.427 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3583871 00:22:33.559 Initializing NVMe Controllers 00:22:33.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:33.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:33.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:33.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:33.559 Initialization complete. Launching workers. 
00:22:33.559 ======================================================== 00:22:33.559 Latency(us) 00:22:33.559 Device Information : IOPS MiB/s Average min max 00:22:33.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6200.00 24.22 10324.26 1097.29 55026.70 00:22:33.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 20086.79 78.46 3185.80 1055.43 46711.65 00:22:33.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6097.90 23.82 10497.69 1480.52 55053.18 00:22:33.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8875.60 34.67 7211.99 851.48 52566.24 00:22:33.559 ======================================================== 00:22:33.559 Total : 41260.29 161.17 6205.18 851.48 55053.18 00:22:33.559 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.559 rmmod nvme_tcp 00:22:33.559 rmmod nvme_fabrics 00:22:33.559 rmmod nvme_keyring 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3583776 ']' 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3583776 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3583776 ']' 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3583776 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3583776 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3583776' 00:22:33.559 killing process with pid 3583776 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3583776 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3583776 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.559 
11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.559 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:36.863 00:22:36.863 real 0m52.983s 00:22:36.863 user 2m46.718s 00:22:36.863 sys 0m11.390s 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.863 ************************************ 00:22:36.863 END TEST nvmf_perf_adq 00:22:36.863 ************************************ 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:36.863 ************************************ 00:22:36.863 START TEST nvmf_shutdown 00:22:36.863 ************************************ 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:36.863 * Looking for test storage... 
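For reference, the nvmftestfini teardown the trace above just performed, condensed into plain shell. The pid (3583776) and the interface/namespace names are this run's values; _remove_spdk_ns has its output suppressed in the trace (redirected via "15> /dev/null"), so the namespace deletion shown here is an assumption about its body.

  sync
  modprobe -v -r nvme-tcp            # rmmod cascade also removed nvme_fabrics and nvme_keyring above
  modprobe -v -r nvme-fabrics
  kill 3583776 && wait 3583776       # killprocess: terminate nvmf_tgt (reactor_0), then reap it
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1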
00:22:36.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.863 --rc genhtml_branch_coverage=1 00:22:36.863 --rc genhtml_function_coverage=1 00:22:36.863 --rc genhtml_legend=1 00:22:36.863 --rc geninfo_all_blocks=1 00:22:36.863 --rc geninfo_unexecuted_blocks=1 00:22:36.863 00:22:36.863 ' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.863 --rc genhtml_branch_coverage=1 00:22:36.863 --rc genhtml_function_coverage=1 00:22:36.863 --rc genhtml_legend=1 00:22:36.863 --rc geninfo_all_blocks=1 00:22:36.863 --rc geninfo_unexecuted_blocks=1 00:22:36.863 00:22:36.863 ' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.863 --rc genhtml_branch_coverage=1 00:22:36.863 --rc genhtml_function_coverage=1 00:22:36.863 --rc genhtml_legend=1 00:22:36.863 --rc geninfo_all_blocks=1 00:22:36.863 --rc geninfo_unexecuted_blocks=1 00:22:36.863 00:22:36.863 ' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.863 --rc genhtml_branch_coverage=1 00:22:36.863 --rc genhtml_function_coverage=1 00:22:36.863 --rc genhtml_legend=1 00:22:36.863 --rc geninfo_all_blocks=1 00:22:36.863 --rc geninfo_unexecuted_blocks=1 00:22:36.863 00:22:36.863 ' 00:22:36.863 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
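The lcov version gate traced above (scripts/common.sh: "lt 1.15 2") splits each version string on ".-:" and compares field by field, first difference wins. A minimal re-derivation of that comparison, assuming purely numeric fields (the real helper also routes each field through a decimal() validator):

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v a b
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}                 # missing fields compare as 0
          (( a > b )) && { [[ $op == '>' ]]; return; }    # first differing field decides
          (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]                                   # all fields equal
  }
  cmp_versions 1.15 '<' 2 && echo "lcov older than 2: use the pre-2.0 LCOV_OPTS"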
00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:36.864 11:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.864 ************************************ 00:22:36.864 START TEST nvmf_shutdown_tc1 00:22:36.864 ************************************ 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.864 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.007 11:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.007 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.008 11:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:45.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:45.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:45.008 Found net devices under 0000:31:00.0: cvl_0_0 00:22:45.008 11:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:45.008 Found net devices under 0000:31:00.1: cvl_0_1 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.008 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:22:45.009 00:22:45.009 --- 10.0.0.2 ping statistics --- 00:22:45.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.009 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:22:45.009 00:22:45.009 --- 10.0.0.1 ping statistics --- 00:22:45.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.009 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3590641 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3590641 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3590641 ']' 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
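The nvmf_tgt start just traced follows SPDK's usual start-and-wait pattern: launch the app inside the target namespace, record nvmfpid, then waitforlisten polls until the RPC socket answers. waitforlisten's body is not expanded in this trace, so the polling loop below is an assumed minimal equivalent, not the verbatim helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  for i in {1..100}; do                                   # give the reactors time to come up
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done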
00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.009 11:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.009 [2024-12-09 11:37:36.521631] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:22:45.009 [2024-12-09 11:37:36.521701] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.009 [2024-12-09 11:37:36.622779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.009 [2024-12-09 11:37:36.674369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.009 [2024-12-09 11:37:36.674422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.009 [2024-12-09 11:37:36.674430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.009 [2024-12-09 11:37:36.674437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.009 [2024-12-09 11:37:36.674443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.009 [2024-12-09 11:37:36.676684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.009 [2024-12-09 11:37:36.676852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.009 [2024-12-09 11:37:36.677024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.009 [2024-12-09 11:37:36.677039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.270 [2024-12-09 11:37:37.358417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:45.270 11:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.270 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.531 Malloc1 
00:22:45.531 [2024-12-09 11:37:37.473337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.531 Malloc2 00:22:45.531 Malloc3 00:22:45.531 Malloc4 00:22:45.531 Malloc5 00:22:45.531 Malloc6 00:22:45.531 Malloc7 00:22:45.791 Malloc8 00:22:45.791 Malloc9 00:22:45.791 Malloc10 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3590882 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3590882 /var/tmp/bdevperf.sock 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3590882 ']' 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
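The shutdown.sh@28/@29 loop traced above cats one block of RPC commands per subsystem into rpcs.txt, and shutdown.sh@36 replays the whole file through rpc_cmd; the Malloc1 .. Malloc10 lines are the bdev names echoed back as those RPCs execute. A minimal sketch of what each iteration plausibly appends — the exact command set and the $MALLOC_* size variables are assumptions (the real script emits them via a bare cat heredoc, the `# cat` at shutdown.sh@29; echo is used here to keep the sketch indentation-safe), while the Malloc$i bdev names, the nqn.2016-06.io.spdk:cnode$i subsystems, and the 10.0.0.2:4420 TCP listener are confirmed by this trace:

    for i in "${num_subsystems[@]}"; do    # num_subsystems=({1..10}), per shutdown.sh@23
      {
        echo "bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"          # shutdown.sh@36: replay every queued RPC in one session

Batching the commands into one file and one rpc_cmd session avoids paying the RPC client startup cost forty times for the ten subsystems.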
00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.791 { 00:22:45.791 "params": { 00:22:45.791 "name": "Nvme$subsystem", 00:22:45.791 "trtype": "$TEST_TRANSPORT", 00:22:45.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.791 "adrfam": "ipv4", 00:22:45.791 "trsvcid": "$NVMF_PORT", 00:22:45.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.791 "hdgst": ${hdgst:-false}, 00:22:45.791 "ddgst": ${ddgst:-false} 00:22:45.791 }, 00:22:45.791 "method": "bdev_nvme_attach_controller" 00:22:45.791 } 00:22:45.791 EOF 00:22:45.791 )") 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.791 { 00:22:45.791 "params": { 00:22:45.791 "name": "Nvme$subsystem", 00:22:45.791 "trtype": "$TEST_TRANSPORT", 00:22:45.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.791 "adrfam": "ipv4", 00:22:45.791 "trsvcid": "$NVMF_PORT", 00:22:45.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.791 "hdgst": ${hdgst:-false}, 00:22:45.791 "ddgst": ${ddgst:-false} 00:22:45.791 }, 00:22:45.791 "method": "bdev_nvme_attach_controller" 00:22:45.791 } 00:22:45.791 EOF 00:22:45.791 )") 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.791 { 00:22:45.791 "params": { 00:22:45.791 "name": "Nvme$subsystem", 00:22:45.791 "trtype": "$TEST_TRANSPORT", 00:22:45.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.791 "adrfam": "ipv4", 00:22:45.791 "trsvcid": "$NVMF_PORT", 00:22:45.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.791 "hdgst": ${hdgst:-false}, 00:22:45.791 "ddgst": ${ddgst:-false} 00:22:45.791 }, 00:22:45.791 "method": "bdev_nvme_attach_controller" 00:22:45.791 } 00:22:45.791 EOF 00:22:45.791 )") 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.791 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.792 { 00:22:45.792 "params": { 00:22:45.792 "name": "Nvme$subsystem", 00:22:45.792 "trtype": "$TEST_TRANSPORT", 00:22:45.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.792 "adrfam": "ipv4", 00:22:45.792 "trsvcid": "$NVMF_PORT", 00:22:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.792 "hdgst": ${hdgst:-false}, 00:22:45.792 "ddgst": ${ddgst:-false} 00:22:45.792 }, 00:22:45.792 "method": "bdev_nvme_attach_controller" 00:22:45.792 } 00:22:45.792 EOF 00:22:45.792 )") 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.792 { 00:22:45.792 "params": { 00:22:45.792 "name": "Nvme$subsystem", 00:22:45.792 "trtype": "$TEST_TRANSPORT", 00:22:45.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.792 "adrfam": "ipv4", 00:22:45.792 "trsvcid": "$NVMF_PORT", 00:22:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.792 "hdgst": ${hdgst:-false}, 00:22:45.792 "ddgst": ${ddgst:-false} 00:22:45.792 }, 00:22:45.792 "method": "bdev_nvme_attach_controller" 00:22:45.792 } 00:22:45.792 EOF 00:22:45.792 )") 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.792 { 00:22:45.792 "params": { 00:22:45.792 "name": "Nvme$subsystem", 00:22:45.792 "trtype": "$TEST_TRANSPORT", 00:22:45.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.792 "adrfam": "ipv4", 00:22:45.792 "trsvcid": "$NVMF_PORT", 00:22:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.792 "hdgst": ${hdgst:-false}, 00:22:45.792 "ddgst": ${ddgst:-false} 00:22:45.792 }, 00:22:45.792 "method": "bdev_nvme_attach_controller" 00:22:45.792 } 00:22:45.792 EOF 00:22:45.792 )") 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.792 { 00:22:45.792 "params": { 00:22:45.792 "name": "Nvme$subsystem", 00:22:45.792 "trtype": "$TEST_TRANSPORT", 00:22:45.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.792 "adrfam": "ipv4", 00:22:45.792 "trsvcid": "$NVMF_PORT", 00:22:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.792 "hdgst": ${hdgst:-false}, 00:22:45.792 "ddgst": ${ddgst:-false} 00:22:45.792 }, 00:22:45.792 "method": "bdev_nvme_attach_controller" 00:22:45.792 } 00:22:45.792 EOF 00:22:45.792 )") 
00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.792 [2024-12-09 11:37:37.936249] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:22:45.792 [2024-12-09 11:37:37.936318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.792 { 00:22:45.792 "params": { 00:22:45.792 "name": "Nvme$subsystem", 00:22:45.792 "trtype": "$TEST_TRANSPORT", 00:22:45.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.792 "adrfam": "ipv4", 00:22:45.792 "trsvcid": "$NVMF_PORT", 00:22:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.792 "hdgst": ${hdgst:-false}, 00:22:45.792 "ddgst": ${ddgst:-false} 00:22:45.792 }, 00:22:45.792 "method": "bdev_nvme_attach_controller" 00:22:45.792 } 00:22:45.792 EOF 00:22:45.792 )") 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.792 { 00:22:45.792 "params": { 00:22:45.792 "name": "Nvme$subsystem", 00:22:45.792 "trtype": "$TEST_TRANSPORT", 00:22:45.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.792 "adrfam": "ipv4", 00:22:45.792 "trsvcid": "$NVMF_PORT", 00:22:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.792 "hdgst": ${hdgst:-false}, 00:22:45.792 "ddgst": ${ddgst:-false} 00:22:45.792 }, 00:22:45.792 "method": "bdev_nvme_attach_controller" 00:22:45.792 } 00:22:45.792 EOF 00:22:45.792 )") 00:22:45.792 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:46.053 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.053 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.053 { 00:22:46.053 "params": { 00:22:46.053 "name": "Nvme$subsystem", 00:22:46.053 "trtype": "$TEST_TRANSPORT", 00:22:46.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.053 "adrfam": "ipv4", 00:22:46.053 "trsvcid": "$NVMF_PORT", 00:22:46.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.053 "hdgst": ${hdgst:-false}, 00:22:46.053 "ddgst": ${ddgst:-false} 00:22:46.053 }, 00:22:46.053 "method": "bdev_nvme_attach_controller" 00:22:46.053 } 00:22:46.053 EOF 00:22:46.053 )") 00:22:46.053 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:46.053 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
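The xtrace fragments from nvmf/common.sh@560-586 above (config=(), the per-subsystem heredoc appends, then jq/IFS/printf) are gen_nvmf_target_json assembling the --json config that bdev_svc was launched with; the fully expanded payload is the printf argument shown just below. Condensed, the function's shape is roughly the following — the outer "subsystems"/"bdev" wrapper is an assumption about how SPDK --json configs are framed (this excerpt only shows the per-controller fragments and the comma join), and the fragment body is trimmed to a few of the fields the trace prints:

    gen_nvmf_target_json() {
      local subsystem config=()
      for subsystem in "${@:-1}"; do     # one bdev_nvme_attach_controller fragment per id
        config+=("$(cat <<- EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
          },
          "method": "bdev_nvme_attach_controller"
        }
        EOF
        )")                              # <<- strips leading tabs in the real script
      done
      # IFS=',' makes "${config[*]}" expand to fragment,fragment,...; jq validates
      # and pretty-prints the assembled document before it is fed to --json.
      local IFS=','
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
    }

This explains the trace ordering (jq, then IFS=',', then the printf of the joined fragments) and why ten near-identical heredoc blocks appear back to back: one per requested controller.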
00:22:46.053 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:46.053 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:46.053 "params": { 00:22:46.053 "name": "Nvme1", 00:22:46.053 "trtype": "tcp", 00:22:46.053 "traddr": "10.0.0.2", 00:22:46.053 "adrfam": "ipv4", 00:22:46.053 "trsvcid": "4420", 00:22:46.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme2", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme3", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme4", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme5", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme6", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme7", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme8", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme9", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 },{ 00:22:46.054 "params": { 00:22:46.054 "name": "Nvme10", 00:22:46.054 "trtype": "tcp", 00:22:46.054 "traddr": "10.0.0.2", 00:22:46.054 "adrfam": "ipv4", 00:22:46.054 "trsvcid": "4420", 00:22:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:46.054 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:46.054 "hdgst": false, 00:22:46.054 "ddgst": false 00:22:46.054 }, 00:22:46.054 "method": "bdev_nvme_attach_controller" 00:22:46.054 }' 00:22:46.054 [2024-12-09 11:37:38.012095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.054 [2024-12-09 11:37:38.048553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3590882 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:47.438 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:48.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3590882 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:48.380 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3590641 00:22:48.380 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:48.380 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:48.380 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:48.380 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:48.380 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:48.380 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.380 { 00:22:48.380 "params": { 00:22:48.380 "name": "Nvme$subsystem", 00:22:48.380 "trtype": "$TEST_TRANSPORT", 00:22:48.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.380 "adrfam": "ipv4", 00:22:48.380 "trsvcid": "$NVMF_PORT", 00:22:48.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.380 "hdgst": ${hdgst:-false}, 00:22:48.380 "ddgst": ${ddgst:-false} 00:22:48.380 }, 00:22:48.380 "method": "bdev_nvme_attach_controller" 00:22:48.380 } 00:22:48.380 EOF 00:22:48.380 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 [2024-12-09 11:37:40.460294] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:22:48.381 [2024-12-09 11:37:40.460359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591413 ] 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:48.381 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:48.381 { 00:22:48.381 "params": { 00:22:48.381 "name": "Nvme$subsystem", 00:22:48.381 "trtype": "$TEST_TRANSPORT", 00:22:48.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.381 "adrfam": "ipv4", 00:22:48.381 "trsvcid": "$NVMF_PORT", 00:22:48.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.381 "hdgst": ${hdgst:-false}, 00:22:48.381 "ddgst": ${ddgst:-false} 00:22:48.381 }, 00:22:48.381 "method": "bdev_nvme_attach_controller" 00:22:48.381 } 00:22:48.381 EOF 00:22:48.381 )") 00:22:48.382 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:48.382 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
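This second round of config generation belongs to the bdevperf relaunch at shutdown.sh@92. tc1's actual assertion happened just before it: the first app (bdev_svc, pid 3590882) was deliberately killed mid-initialization — the shell's "Killed" message above — and the test then verified that the nvmf target (pid 3590641) survived the abrupt disconnect before driving real I/O at it. The liveness idiom, reconstructed from the traced shutdown.sh lines (the variable names are assumptions; the line numbers and PIDs are from this run):

    kill -9 "$perfpid"           # shutdown.sh@84: SIGKILL the half-initialized app (3590882 here)
    rm -f /var/run/spdk_bdev1    # shutdown.sh@85: clean up its leftover state file
    sleep 1                      # shutdown.sh@88: give the target time to reap the dead connection
    kill -0 "$nvmfpid"           # shutdown.sh@89: signal 0 delivers nothing, it only checks that
                                 # the pid still exists; the test fails here if the target crashed

Under `set -e`, a target crash during the disconnect makes `kill -0` return non-zero and aborts the test case immediately.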
00:22:48.382 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:48.382 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme1", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme2", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme3", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme4", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme5", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme6", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme7", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme8", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme9", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 },{ 00:22:48.382 "params": { 00:22:48.382 "name": "Nvme10", 00:22:48.382 "trtype": "tcp", 00:22:48.382 "traddr": "10.0.0.2", 00:22:48.382 "adrfam": "ipv4", 00:22:48.382 "trsvcid": "4420", 00:22:48.382 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:48.382 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:48.382 "hdgst": false, 00:22:48.382 "ddgst": false 00:22:48.382 }, 00:22:48.382 "method": "bdev_nvme_attach_controller" 00:22:48.382 }' 00:22:48.382 [2024-12-09 11:37:40.535062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.644 [2024-12-09 11:37:40.571409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.030 Running I/O for 1 seconds... 00:22:50.971 1864.00 IOPS, 116.50 MiB/s 00:22:50.971 Latency(us) 00:22:50.971 [2024-12-09T10:37:43.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.971 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme1n1 : 1.18 217.81 13.61 0.00 0.00 290883.20 22500.69 270882.13 00:22:50.971 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme2n1 : 1.05 183.67 11.48 0.00 0.00 338482.35 19770.03 279620.27 00:22:50.971 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme3n1 : 1.06 242.56 15.16 0.00 0.00 251567.57 20753.07 235929.60 00:22:50.971 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme4n1 : 1.17 276.36 17.27 0.00 0.00 217967.33 2048.00 239424.85 00:22:50.971 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme5n1 : 1.17 219.70 13.73 0.00 0.00 269570.99 17367.04 251658.24 00:22:50.971 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme6n1 : 1.16 220.27 13.77 0.00 0.00 263933.65 16056.32 241172.48 00:22:50.971 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme7n1 : 1.19 269.87 16.87 0.00 0.00 212105.39 21299.20 241172.48 00:22:50.971 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.971 Verification LBA range: start 0x0 length 0x400 00:22:50.971 Nvme8n1 : 1.22 264.99 16.56 0.00 0.00 205108.89 3741.01 248162.99 00:22:50.971 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.972 Verification LBA range: start 0x0 length 0x400 00:22:50.972 Nvme9n1 : 1.19 268.88 16.80 0.00 0.00 205416.45 16056.32 274377.39 00:22:50.972 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:50.972 Verification LBA range: start 0x0 length 0x400 00:22:50.972 Nvme10n1 : 1.18 270.97 16.94 0.00 0.00 199698.43 16711.68 272629.76 00:22:50.972 [2024-12-09T10:37:43.134Z] =================================================================================================================== 00:22:50.972 [2024-12-09T10:37:43.134Z] Total : 2435.08 152.19 0.00 0.00 239029.88 2048.00 279620.27 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.232 rmmod nvme_tcp 00:22:51.232 rmmod nvme_fabrics 00:22:51.232 rmmod nvme_keyring 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3590641 ']' 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3590641 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3590641 ']' 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3590641 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.232 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3590641 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:51.492 11:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3590641' 00:22:51.492 killing process with pid 3590641 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3590641 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3590641 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:51.492 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.493 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.493 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.493 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.493 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.493 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.493 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.038 00:22:54.038 real 0m16.876s 00:22:54.038 user 0m33.780s 00:22:54.038 sys 0m6.981s 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:54.038 ************************************ 00:22:54.038 END TEST nvmf_shutdown_tc1 00:22:54.038 ************************************ 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:54.038 ************************************ 00:22:54.038 START TEST nvmf_shutdown_tc2 00:22:54.038 ************************************ 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.038 11:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.038 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:54.039 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.039 11:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:54.039 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:54.039 Found net devices under 0000:31:00.0: cvl_0_0 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.039 11:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:54.039 Found net devices under 0000:31:00.1: cvl_0_1 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.039 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:22:54.039 00:22:54.039 --- 10.0.0.2 ping statistics --- 00:22:54.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.039 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:22:54.039 00:22:54.039 --- 10.0.0.1 ping statistics --- 00:22:54.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.039 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.039 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3592533 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3592533 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3592533 ']' 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.040 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.300 [2024-12-09 11:37:46.203676] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:22:54.300 [2024-12-09 11:37:46.203726] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.300 [2024-12-09 11:37:46.298101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.300 [2024-12-09 11:37:46.329741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.300 [2024-12-09 11:37:46.329773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.300 [2024-12-09 11:37:46.329779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.300 [2024-12-09 11:37:46.329784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.300 [2024-12-09 11:37:46.329788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
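The trace above captures nvmf_tcp_init building the point-to-point test topology: port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) to host the target, cvl_0_1 stays in the root namespace as the initiator, each side gets a /24 address, an iptables ACCEPT rule tagged with an SPDK_NVMF comment opens TCP port 4420, and a ping in each direction confirms the link before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same plumbing, assuming the interface names and addresses from this run (the setup_tcp_testbed helper is illustrative, not the actual nvmf/common.sh function):

    # Sketch: isolate the target NIC in its own netns, as nvmf_tcp_init does.
    # Interface names and 10.0.0.x addresses are taken from the log above.
    setup_tcp_testbed() {
        local ns=cvl_0_0_ns_spdk
        ip netns add "$ns"
        ip link set cvl_0_0 netns "$ns"          # target side, namespaced
        ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, root ns
        ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
        ip link set cvl_0_1 up
        ip netns exec "$ns" ip link set cvl_0_0 up
        ip netns exec "$ns" ip link set lo up
        # Tagging the rule lets teardown strip exactly these rules later
        # with: iptables-save | grep -v SPDK_NVMF | iptables-restore
        iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
            -m comment --comment SPDK_NVMF
        ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
    }

Because the two E810 ports are cabled back to back, namespacing one of them yields a real NIC-to-NIC TCP path on a single machine, which is why the target below runs under ip netns exec while the initiator-side tools run unwrapped.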
00:22:54.300 [2024-12-09 11:37:46.331039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.300 [2024-12-09 11:37:46.331249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.300 [2024-12-09 11:37:46.331404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.300 [2024-12-09 11:37:46.331405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:54.872 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.872 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:54.872 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.872 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.872 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.132 [2024-12-09 11:37:47.053738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.132 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.132 Malloc1 00:22:55.132 [2024-12-09 11:37:47.161831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.132 Malloc2 00:22:55.132 Malloc3 00:22:55.132 Malloc4 00:22:55.393 Malloc5 00:22:55.393 Malloc6 00:22:55.393 Malloc7 00:22:55.393 Malloc8 00:22:55.393 Malloc9 00:22:55.393 Malloc10 00:22:55.393 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.393 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:55.393 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.393 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3592913 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3592913 /var/tmp/bdevperf.sock 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3592913 ']' 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.655 11:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.655 { 00:22:55.655 "params": { 00:22:55.655 "name": "Nvme$subsystem", 00:22:55.655 "trtype": "$TEST_TRANSPORT", 00:22:55.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.655 "adrfam": "ipv4", 00:22:55.655 "trsvcid": "$NVMF_PORT", 00:22:55.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.655 "hdgst": ${hdgst:-false}, 00:22:55.655 "ddgst": ${ddgst:-false} 00:22:55.655 }, 00:22:55.655 "method": "bdev_nvme_attach_controller" 00:22:55.655 } 00:22:55.655 EOF 00:22:55.655 )") 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.655 { 00:22:55.655 "params": { 00:22:55.655 "name": "Nvme$subsystem", 00:22:55.655 "trtype": "$TEST_TRANSPORT", 00:22:55.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.655 "adrfam": "ipv4", 00:22:55.655 "trsvcid": "$NVMF_PORT", 00:22:55.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.655 "hdgst": ${hdgst:-false}, 00:22:55.655 "ddgst": ${ddgst:-false} 00:22:55.655 }, 00:22:55.655 "method": "bdev_nvme_attach_controller" 00:22:55.655 } 00:22:55.655 EOF 00:22:55.655 )") 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.655 { 00:22:55.655 "params": { 00:22:55.655 
"name": "Nvme$subsystem", 00:22:55.655 "trtype": "$TEST_TRANSPORT", 00:22:55.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.655 "adrfam": "ipv4", 00:22:55.655 "trsvcid": "$NVMF_PORT", 00:22:55.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.655 "hdgst": ${hdgst:-false}, 00:22:55.655 "ddgst": ${ddgst:-false} 00:22:55.655 }, 00:22:55.655 "method": "bdev_nvme_attach_controller" 00:22:55.655 } 00:22:55.655 EOF 00:22:55.655 )") 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.655 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.655 { 00:22:55.655 "params": { 00:22:55.655 "name": "Nvme$subsystem", 00:22:55.655 "trtype": "$TEST_TRANSPORT", 00:22:55.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.655 "adrfam": "ipv4", 00:22:55.655 "trsvcid": "$NVMF_PORT", 00:22:55.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.655 "hdgst": ${hdgst:-false}, 00:22:55.655 "ddgst": ${ddgst:-false} 00:22:55.655 }, 00:22:55.655 "method": "bdev_nvme_attach_controller" 00:22:55.655 } 00:22:55.655 EOF 00:22:55.655 )") 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.656 { 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme$subsystem", 00:22:55.656 "trtype": "$TEST_TRANSPORT", 00:22:55.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "$NVMF_PORT", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.656 "hdgst": ${hdgst:-false}, 00:22:55.656 "ddgst": ${ddgst:-false} 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 } 00:22:55.656 EOF 00:22:55.656 )") 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.656 { 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme$subsystem", 00:22:55.656 "trtype": "$TEST_TRANSPORT", 00:22:55.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "$NVMF_PORT", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.656 "hdgst": ${hdgst:-false}, 00:22:55.656 "ddgst": ${ddgst:-false} 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 } 00:22:55.656 EOF 00:22:55.656 )") 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.656 [2024-12-09 11:37:47.607807] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.656 [2024-12-09 11:37:47.607860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592913 ] 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.656 { 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme$subsystem", 00:22:55.656 "trtype": "$TEST_TRANSPORT", 00:22:55.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "$NVMF_PORT", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.656 "hdgst": ${hdgst:-false}, 00:22:55.656 "ddgst": ${ddgst:-false} 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 } 00:22:55.656 EOF 00:22:55.656 )") 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.656 { 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme$subsystem", 00:22:55.656 "trtype": "$TEST_TRANSPORT", 00:22:55.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "$NVMF_PORT", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.656 "hdgst": ${hdgst:-false}, 00:22:55.656 "ddgst": ${ddgst:-false} 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 } 00:22:55.656 EOF 00:22:55.656 )") 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.656 { 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme$subsystem", 00:22:55.656 "trtype": "$TEST_TRANSPORT", 00:22:55.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "$NVMF_PORT", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.656 "hdgst": ${hdgst:-false}, 00:22:55.656 "ddgst": ${ddgst:-false} 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 } 00:22:55.656 EOF 00:22:55.656 )") 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.656 { 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme$subsystem", 00:22:55.656 "trtype": "$TEST_TRANSPORT", 00:22:55.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.656 
"adrfam": "ipv4", 00:22:55.656 "trsvcid": "$NVMF_PORT", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.656 "hdgst": ${hdgst:-false}, 00:22:55.656 "ddgst": ${ddgst:-false} 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 } 00:22:55.656 EOF 00:22:55.656 )") 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:55.656 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme1", 00:22:55.656 "trtype": "tcp", 00:22:55.656 "traddr": "10.0.0.2", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "4420", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.656 "hdgst": false, 00:22:55.656 "ddgst": false 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 },{ 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme2", 00:22:55.656 "trtype": "tcp", 00:22:55.656 "traddr": "10.0.0.2", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "4420", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:55.656 "hdgst": false, 00:22:55.656 "ddgst": false 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 },{ 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme3", 00:22:55.656 "trtype": "tcp", 00:22:55.656 "traddr": "10.0.0.2", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "4420", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:55.656 "hdgst": false, 00:22:55.656 "ddgst": false 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 },{ 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme4", 00:22:55.656 "trtype": "tcp", 00:22:55.656 "traddr": "10.0.0.2", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "4420", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:55.656 "hdgst": false, 00:22:55.656 "ddgst": false 00:22:55.656 }, 00:22:55.656 "method": "bdev_nvme_attach_controller" 00:22:55.656 },{ 00:22:55.656 "params": { 00:22:55.656 "name": "Nvme5", 00:22:55.656 "trtype": "tcp", 00:22:55.656 "traddr": "10.0.0.2", 00:22:55.656 "adrfam": "ipv4", 00:22:55.656 "trsvcid": "4420", 00:22:55.656 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:55.656 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:55.656 "hdgst": false, 00:22:55.656 "ddgst": false 00:22:55.656 }, 00:22:55.657 "method": "bdev_nvme_attach_controller" 00:22:55.657 },{ 00:22:55.657 "params": { 00:22:55.657 "name": "Nvme6", 00:22:55.657 "trtype": "tcp", 00:22:55.657 "traddr": "10.0.0.2", 00:22:55.657 "adrfam": "ipv4", 00:22:55.657 "trsvcid": "4420", 00:22:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:55.657 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:55.657 "hdgst": false, 00:22:55.657 "ddgst": false 00:22:55.657 }, 00:22:55.657 "method": "bdev_nvme_attach_controller" 00:22:55.657 },{ 00:22:55.657 "params": { 00:22:55.657 "name": "Nvme7", 00:22:55.657 "trtype": "tcp", 00:22:55.657 "traddr": "10.0.0.2", 
00:22:55.657 "adrfam": "ipv4", 00:22:55.657 "trsvcid": "4420", 00:22:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:55.657 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:55.657 "hdgst": false, 00:22:55.657 "ddgst": false 00:22:55.657 }, 00:22:55.657 "method": "bdev_nvme_attach_controller" 00:22:55.657 },{ 00:22:55.657 "params": { 00:22:55.657 "name": "Nvme8", 00:22:55.657 "trtype": "tcp", 00:22:55.657 "traddr": "10.0.0.2", 00:22:55.657 "adrfam": "ipv4", 00:22:55.657 "trsvcid": "4420", 00:22:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:55.657 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:55.657 "hdgst": false, 00:22:55.657 "ddgst": false 00:22:55.657 }, 00:22:55.657 "method": "bdev_nvme_attach_controller" 00:22:55.657 },{ 00:22:55.657 "params": { 00:22:55.657 "name": "Nvme9", 00:22:55.657 "trtype": "tcp", 00:22:55.657 "traddr": "10.0.0.2", 00:22:55.657 "adrfam": "ipv4", 00:22:55.657 "trsvcid": "4420", 00:22:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:55.657 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:55.657 "hdgst": false, 00:22:55.657 "ddgst": false 00:22:55.657 }, 00:22:55.657 "method": "bdev_nvme_attach_controller" 00:22:55.657 },{ 00:22:55.657 "params": { 00:22:55.657 "name": "Nvme10", 00:22:55.657 "trtype": "tcp", 00:22:55.657 "traddr": "10.0.0.2", 00:22:55.657 "adrfam": "ipv4", 00:22:55.657 "trsvcid": "4420", 00:22:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:55.657 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:55.657 "hdgst": false, 00:22:55.657 "ddgst": false 00:22:55.657 }, 00:22:55.657 "method": "bdev_nvme_attach_controller" 00:22:55.657 }' 00:22:55.657 [2024-12-09 11:37:47.680428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.657 [2024-12-09 11:37:47.716836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.041 Running I/O for 10 seconds... 
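gen_nvmf_target_json, traced above, emits one bdev_nvme_attach_controller stanza per subsystem by appending heredoc fragments to a bash array, joins them with IFS=,, runs the result through jq, and hands the document to bdevperf as --json /dev/fd/63 via process substitution. A condensed sketch of that pattern, assuming the NQNs and addresses from this run; gen_target_json is a simplified stand-in for the nvmf/common.sh implementation, and it swaps the heredoc accumulation for printf -v purely for compactness:

    # Sketch: one attach stanza per subsystem, comma-joined, validated by jq.
    gen_target_json() {
        local -a config=()
        local n stanza
        for n in "$@"; do
            printf -v stanza '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$n" "$n" "$n"
            config+=("$stanza")
        done
        local IFS=,
        # jq both validates and pretty-prints the merged document
        jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
    }

    # Usage mirroring the invocation above:
    #   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json {1..10}) \
    #       -q 64 -o 65536 -w verify -t 10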
00:22:57.041 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.041 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:57.041 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:57.041 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.041 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.041 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.042 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.302 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.302 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:57.302 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:57.302 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:57.562 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:57.562 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:57.562 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:57.562 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:57.562 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.563 11:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.563 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.563 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:57.563 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:57.563 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3592913 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3592913 ']' 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3592913 00:22:57.823 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:57.824 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.824 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3592913 00:22:57.824 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.824 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.824 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3592913' 00:22:57.824 killing process with pid 3592913 00:22:57.824 11:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3592913 00:22:57.824 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3592913 00:22:57.824 Received shutdown signal, test time was about 0.973886 seconds 00:22:57.824 00:22:57.824 Latency(us) 00:22:57.824 [2024-12-09T10:37:49.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.824 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme1n1 : 0.97 264.72 16.54 0.00 0.00 238704.43 23374.51 253405.87 00:22:57.824 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme2n1 : 0.95 201.29 12.58 0.00 0.00 307954.63 19551.57 267386.88 00:22:57.824 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme3n1 : 0.94 214.87 13.43 0.00 0.00 280026.22 5570.56 260396.37 00:22:57.824 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme4n1 : 0.93 210.78 13.17 0.00 0.00 278331.38 6772.05 230686.72 00:22:57.824 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme5n1 : 0.97 263.10 16.44 0.00 0.00 221456.43 22391.47 251658.24 00:22:57.824 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme6n1 : 0.97 264.98 16.56 0.00 0.00 214758.61 21080.75 253405.87 00:22:57.824 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme7n1 : 0.95 202.30 12.64 0.00 0.00 274769.35 19114.67 267386.88 00:22:57.824 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme8n1 : 0.96 271.20 16.95 0.00 0.00 200496.89 13380.27 234181.97 00:22:57.824 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme9n1 : 0.96 266.49 16.66 0.00 0.00 199246.93 18568.53 255153.49 00:22:57.824 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.824 Verification LBA range: start 0x0 length 0x400 00:22:57.824 Nvme10n1 : 0.94 214.82 13.43 0.00 0.00 237857.95 5543.25 241172.48 00:22:57.824 [2024-12-09T10:37:49.986Z] =================================================================================================================== 00:22:57.824 [2024-12-09T10:37:49.986Z] Total : 2374.55 148.41 0.00 0.00 241180.85 5543.25 267386.88 00:22:58.084 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3592533 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:59.027 11:37:51 
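The poll visible above (read_io_count climbing 3, then 67, then 131 across 0.25 s sleeps) is waitforio acting as the readiness gate for the shutdown test: it repeatedly queries bdev_get_iostat on the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads, so the process is only killed once I/O is verifiably in flight. A hedged sketch of that loop, assuming SPDK's scripts/rpc.py is reachable on PATH as rpc.py; the loop bounds and jq filter mirror the trace, while the function body is a simplified reconstruction rather than a copy of target/shutdown.sh:

    # Sketch: block until a bdev has served enough reads, as waitforio does.
    waitforio() {
        local sock=$1 bdev=$2 want=${3:-100} i count
        for ((i = 10; i != 0; i--)); do
            count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            [[ $count -ge $want ]] && return 0   # enough I/O observed
            sleep 0.25
        done
        return 1                                 # workload never ramped up
    }

    # e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1

Gating on observed iostat rather than a fixed sleep keeps the test honest: the killprocess that follows is guaranteed to land while the verify workload is still active, which is exactly the mid-I/O shutdown the test case is meant to exercise.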
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.027 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.027 rmmod nvme_tcp 00:22:59.027 rmmod nvme_fabrics 00:22:59.027 rmmod nvme_keyring 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3592533 ']' 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3592533 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3592533 ']' 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3592533 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3592533 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3592533' 00:22:59.287 killing process with pid 3592533 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3592533 00:22:59.287 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3592533 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.549 11:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.549 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.462 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.462 00:23:01.462 real 0m7.796s 00:23:01.462 user 0m23.400s 00:23:01.462 sys 0m1.250s 00:23:01.462 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.462 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.462 ************************************ 00:23:01.462 END TEST nvmf_shutdown_tc2 00:23:01.462 ************************************ 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:01.724 ************************************ 00:23:01.724 START TEST nvmf_shutdown_tc3 00:23:01.724 ************************************ 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:01.724 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:01.725 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:01.725 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.725 11:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:01.725 Found net devices under 0000:31:00.0: cvl_0_0 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:01.725 Found net devices under 0000:31:00.1: cvl_0_1 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.725 11:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:01.725 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.987 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.987 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptr -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.988 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:01.988 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:01.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:23:01.988 00:23:01.988 --- 10.0.0.2 ping statistics --- 00:23:01.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.988 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:23:01.988 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:23:01.988 00:23:01.988 --- 10.0.0.1 ping statistics --- 00:23:01.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.988 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3594360 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3594360 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:01.988 11:37:54
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3594360 ']' 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.988 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.988 [2024-12-09 11:37:54.126834] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:23:01.988 [2024-12-09 11:37:54.126907] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.249 [2024-12-09 11:37:54.222450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.249 [2024-12-09 11:37:54.256436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.249 [2024-12-09 11:37:54.256467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.249 [2024-12-09 11:37:54.256473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.249 [2024-12-09 11:37:54.256478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.249 [2024-12-09 11:37:54.256482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.249 [2024-12-09 11:37:54.257761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.249 [2024-12-09 11:37:54.257920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.249 [2024-12-09 11:37:54.258081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:02.249 [2024-12-09 11:37:54.258258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:02.821 [2024-12-09 11:37:54.970216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.821 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.080 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.080 Malloc1 00:23:03.080 [2024-12-09 11:37:55.078804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.080 Malloc2 00:23:03.080 Malloc3 00:23:03.080 Malloc4 00:23:03.080 Malloc5 00:23:03.340 Malloc6 00:23:03.340 Malloc7 00:23:03.340 Malloc8 00:23:03.340 Malloc9 00:23:03.340 Malloc10 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3594587 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3594587 /var/tmp/bdevperf.sock 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3594587 ']' 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.340 11:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.340 { 00:23:03.340 "params": { 00:23:03.340 "name": "Nvme$subsystem", 00:23:03.340 "trtype": "$TEST_TRANSPORT", 00:23:03.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.340 "adrfam": "ipv4", 00:23:03.340 "trsvcid": "$NVMF_PORT", 00:23:03.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.340 "hdgst": ${hdgst:-false}, 00:23:03.340 "ddgst": ${ddgst:-false} 00:23:03.340 }, 00:23:03.340 "method": "bdev_nvme_attach_controller" 00:23:03.340 } 00:23:03.340 EOF 00:23:03.340 )") 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.340 { 00:23:03.340 "params": { 00:23:03.340 "name": "Nvme$subsystem", 00:23:03.340 "trtype": "$TEST_TRANSPORT", 00:23:03.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.340 "adrfam": "ipv4", 00:23:03.340 "trsvcid": "$NVMF_PORT", 00:23:03.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.340 "hdgst": ${hdgst:-false}, 00:23:03.340 "ddgst": ${ddgst:-false} 00:23:03.340 }, 00:23:03.340 "method": "bdev_nvme_attach_controller" 00:23:03.340 } 00:23:03.340 EOF 00:23:03.340 )") 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.340 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.340 { 00:23:03.340 "params": { 00:23:03.340 
"name": "Nvme$subsystem", 00:23:03.340 "trtype": "$TEST_TRANSPORT", 00:23:03.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.341 "adrfam": "ipv4", 00:23:03.341 "trsvcid": "$NVMF_PORT", 00:23:03.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.341 "hdgst": ${hdgst:-false}, 00:23:03.341 "ddgst": ${ddgst:-false} 00:23:03.341 }, 00:23:03.341 "method": "bdev_nvme_attach_controller" 00:23:03.341 } 00:23:03.341 EOF 00:23:03.341 )") 00:23:03.341 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.601 { 00:23:03.601 "params": { 00:23:03.601 "name": "Nvme$subsystem", 00:23:03.601 "trtype": "$TEST_TRANSPORT", 00:23:03.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.601 "adrfam": "ipv4", 00:23:03.601 "trsvcid": "$NVMF_PORT", 00:23:03.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.601 "hdgst": ${hdgst:-false}, 00:23:03.601 "ddgst": ${ddgst:-false} 00:23:03.601 }, 00:23:03.601 "method": "bdev_nvme_attach_controller" 00:23:03.601 } 00:23:03.601 EOF 00:23:03.601 )") 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.601 { 00:23:03.601 "params": { 00:23:03.601 "name": "Nvme$subsystem", 00:23:03.601 "trtype": "$TEST_TRANSPORT", 00:23:03.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.601 "adrfam": "ipv4", 00:23:03.601 "trsvcid": "$NVMF_PORT", 00:23:03.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.601 "hdgst": ${hdgst:-false}, 00:23:03.601 "ddgst": ${ddgst:-false} 00:23:03.601 }, 00:23:03.601 "method": "bdev_nvme_attach_controller" 00:23:03.601 } 00:23:03.601 EOF 00:23:03.601 )") 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.601 { 00:23:03.601 "params": { 00:23:03.601 "name": "Nvme$subsystem", 00:23:03.601 "trtype": "$TEST_TRANSPORT", 00:23:03.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.601 "adrfam": "ipv4", 00:23:03.601 "trsvcid": "$NVMF_PORT", 00:23:03.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.601 "hdgst": ${hdgst:-false}, 00:23:03.601 "ddgst": ${ddgst:-false} 00:23:03.601 }, 00:23:03.601 "method": "bdev_nvme_attach_controller" 00:23:03.601 } 00:23:03.601 EOF 00:23:03.601 )") 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.601 [2024-12-09 11:37:55.524675] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:23:03.601 [2024-12-09 11:37:55.524730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594587 ] 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.601 { 00:23:03.601 "params": { 00:23:03.601 "name": "Nvme$subsystem", 00:23:03.601 "trtype": "$TEST_TRANSPORT", 00:23:03.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.601 "adrfam": "ipv4", 00:23:03.601 "trsvcid": "$NVMF_PORT", 00:23:03.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.601 "hdgst": ${hdgst:-false}, 00:23:03.601 "ddgst": ${ddgst:-false} 00:23:03.601 }, 00:23:03.601 "method": "bdev_nvme_attach_controller" 00:23:03.601 } 00:23:03.601 EOF 00:23:03.601 )") 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.601 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.601 { 00:23:03.601 "params": { 00:23:03.601 "name": "Nvme$subsystem", 00:23:03.601 "trtype": "$TEST_TRANSPORT", 00:23:03.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.601 "adrfam": "ipv4", 00:23:03.601 "trsvcid": "$NVMF_PORT", 00:23:03.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.602 "hdgst": ${hdgst:-false}, 00:23:03.602 "ddgst": ${ddgst:-false} 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 } 00:23:03.602 EOF 00:23:03.602 )") 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.602 { 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme$subsystem", 00:23:03.602 "trtype": "$TEST_TRANSPORT", 00:23:03.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "$NVMF_PORT", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.602 "hdgst": ${hdgst:-false}, 00:23:03.602 "ddgst": ${ddgst:-false} 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 } 00:23:03.602 EOF 00:23:03.602 )") 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.602 { 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme$subsystem", 00:23:03.602 "trtype": "$TEST_TRANSPORT", 00:23:03.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.602 
"adrfam": "ipv4", 00:23:03.602 "trsvcid": "$NVMF_PORT", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.602 "hdgst": ${hdgst:-false}, 00:23:03.602 "ddgst": ${ddgst:-false} 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 } 00:23:03.602 EOF 00:23:03.602 )") 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:03.602 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme1", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme2", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme3", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme4", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme5", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme6", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme7", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 
00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme8", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme9", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 },{ 00:23:03.602 "params": { 00:23:03.602 "name": "Nvme10", 00:23:03.602 "trtype": "tcp", 00:23:03.602 "traddr": "10.0.0.2", 00:23:03.602 "adrfam": "ipv4", 00:23:03.602 "trsvcid": "4420", 00:23:03.602 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:03.602 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:03.602 "hdgst": false, 00:23:03.602 "ddgst": false 00:23:03.602 }, 00:23:03.602 "method": "bdev_nvme_attach_controller" 00:23:03.602 }' 00:23:03.602 [2024-12-09 11:37:55.597547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.602 [2024-12-09 11:37:55.634178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.000 Running I/O for 10 seconds... 
00:23:05.000 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.000 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:05.000 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:05.000 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.000 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:05.260 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:05.520 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:05.521 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3594360 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3594360 ']' 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3594360 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3594360 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.791 11:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3594360' 00:23:05.791 killing process with pid 3594360 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3594360 00:23:05.791 11:37:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3594360 00:23:05.791 [2024-12-09 11:37:57.898162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352c90 is same with the state(6) to be set
[message repeated verbatim for tqpair=0x2352c90 through 11:37:57.898522]
00:23:05.792 [2024-12-09 11:37:57.900270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ca260 is same with the state(6) to be set
[message repeated verbatim for tqpair=0x23ca260 through 11:37:57.900597]
00:23:05.792 [2024-12-09 11:37:57.905315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set
[message repeated verbatim for tqpair=0x2353180 through 11:37:57.905428]
00:23:05.793 [2024-12-09 11:37:57.905432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same
with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905539] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.905626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353180 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the 
state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.793 [2024-12-09 11:37:57.906836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 
11:37:57.906953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.906998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353650 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same 
with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907855] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.794 [2024-12-09 11:37:57.907951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the 
state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.907996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2353b40 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 
11:37:57.908736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.795 [2024-12-09 11:37:57.908836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same 
with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.908904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354010 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.909525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354390 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354710 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354710 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354710 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910623] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the 
state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.910917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354c00 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.911381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23550d0 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.911395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23550d0 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.911400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23550d0 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.911405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23550d0 is same with the state(6) to be set 00:23:05.796 [2024-12-09 11:37:57.911411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23550d0 is same with the state(6) to be set 00:23:05.797 [2024-12-09 11:37:57.911416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23550d0 is same with the state(6) to be set 00:23:05.797 [2024-12-09 
11:37:57.911421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23550d0 is same with the state(6) to be set 00:23:05.797 [... same message repeated for tqpair=0x23550d0 through 2024-12-09 11:37:57.911697 ...] 00:23:05.797 [2024-12-09 11:37:57.916359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-12-09 11:37:57.916608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.797 [2024-12-09 11:37:57.916615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.916988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.916998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-12-09 11:37:57.917308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.798 [2024-12-09 11:37:57.917315] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.799 [2024-12-09 11:37:57.917484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:05.799 [2024-12-09 11:37:57.917731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9610 is same with the state(6) to be set 00:23:05.799 [2024-12-09 11:37:57.917833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2351750 is same with the state(6) to be set 00:23:05.799 [2024-12-09 11:37:57.917925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:05.799 [2024-12-09 11:37:57.917934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.917980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.917987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233db80 is same with the state(6) to be set 00:23:05.799 [2024-12-09 11:37:57.918018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2351570 is same with the state(6) to be set 00:23:05.799 [2024-12-09 11:37:57.918108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23071c0 is same with the state(6) to be set 00:23:05.799 [2024-12-09 11:37:57.918196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e9b0 is same with the state(6) to be set 00:23:05.799 [2024-12-09 11:37:57.918283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-12-09 11:37:57.918317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.799 [2024-12-09 11:37:57.918324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:05.799 [2024-12-09 11:37:57.918332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecdb10 is same with the state(6) to be set 00:23:05.800 [2024-12-09 11:37:57.918370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eda230 is same with the state(6) to be set 00:23:05.800 [2024-12-09 11:37:57.918457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1eda430 is same with the state(6) to be set 00:23:05.800 [2024-12-09 11:37:57.918540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.800 [2024-12-09 11:37:57.918597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf40 is same with the state(6) to be set 00:23:05.800 [2024-12-09 11:37:57.918860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.918878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.918899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.918917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.918934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.918951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.918967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.918984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.918994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.800 [2024-12-09 11:37:57.919217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-12-09 11:37:57.919226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-12-09 11:37:57.919234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-12-09 11:37:57.919243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-12-09 11:37:57.919251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-12-09 11:37:57.919265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-12-09 11:37:57.919273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-12-09 11:37:57.919282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-12-09 11:37:57.919290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-12-09 11:37:57.919299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-12-09 11:37:57.919307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-12-09 11:37:57.919316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:05.801 [2024-12-09 11:37:57.919323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.801 [2024-12-09 11:37:57.919333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical WRITE command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:25-60, lba:27776-32256 ...]
00:23:05.802 [2024-12-09 11:37:57.928396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.802 [2024-12-09 11:37:57.928404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.802 [2024-12-09 11:37:57.929875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:05.802 [2024-12-09 11:37:57.929912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9610 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.929954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2351750 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.929970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233db80 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.929985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2351570 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.929998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23071c0 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.930030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e9b0 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.930045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecdb10 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.930061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eda230 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.930077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eda430 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.930094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecaf40 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.930148] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:05.802 [2024-12-09 11:37:57.932253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:05.802 [2024-12-09 11:37:57.932499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:05.802 [2024-12-09 11:37:57.932518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df9610 with addr=10.0.0.2, port=4420
00:23:05.802 [2024-12-09 11:37:57.932527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9610 is same with the state(6) to be set
00:23:05.802 [2024-12-09 11:37:57.932990] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:05.802 [2024-12-09 11:37:57.933044] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:05.802 [2024-12-09 11:37:57.933089] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:05.802 [2024-12-09 11:37:57.933313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:05.802 [2024-12-09 11:37:57.933326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e9b0 with addr=10.0.0.2, port=4420
00:23:05.802 [2024-12-09 11:37:57.933335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e9b0 is same with the state(6) to be set
00:23:05.802 [2024-12-09 11:37:57.933346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9610 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.933387] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:05.802 [2024-12-09 11:37:57.933461] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:05.802 [2024-12-09 11:37:57.933494] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:05.802 [2024-12-09 11:37:57.933554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e9b0 (9): Bad file descriptor
00:23:05.802 [2024-12-09 11:37:57.933567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:05.802 [2024-12-09 11:37:57.933574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:05.802 [2024-12-09 11:37:57.933583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:05.802 [2024-12-09 11:37:57.933592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:05.802 [2024-12-09 11:37:57.933639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.802 [2024-12-09 11:37:57.933649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-62, lba:24704-32512 ...]
00:23:05.803 [2024-12-09 11:37:57.934749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.803 [2024-12-09 11:37:57.934756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.803 [2024-12-09 11:37:57.934764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e9860 is same with the state(6) to be set
00:23:05.803 [2024-12-09 11:37:57.934857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:05.803 [2024-12-09 11:37:57.934866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:05.804 [2024-12-09 11:37:57.934874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:05.804 [2024-12-09 11:37:57.934881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:05.804 [2024-12-09 11:37:57.936141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:05.804 [2024-12-09 11:37:57.936402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:05.804 [2024-12-09 11:37:57.936418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecaf40 with addr=10.0.0.2, port=4420
00:23:05.804 [2024-12-09 11:37:57.936428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf40 is same with the state(6) to be set
00:23:05.804 [2024-12-09 11:37:57.936731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecaf40 (9): Bad file descriptor
00:23:05.804 [2024-12-09 11:37:57.936780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:05.804 [2024-12-09 11:37:57.936787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:05.804 [2024-12-09 11:37:57.936795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:05.804 [2024-12-09 11:37:57.936802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:05.804 [2024-12-09 11:37:57.940046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.804 [2024-12-09 11:37:57.940059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-62, lba:16512-24320 ...]
00:23:05.805 [2024-12-09 11:37:57.941150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.805 [2024-12-09 11:37:57.941158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.805 [2024-12-09 11:37:57.941167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e7800 is same with the state(6) to be set
00:23:06.073 [2024-12-09 11:37:57.942442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:06.073 [2024-12-09 11:37:57.942456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-10, lba:24704-25856 ...]
00:23:06.074 [2024-12-09 11:37:57.942654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:06.074 [2024-12-09 11:37:57.942661] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.942987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.942997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.074 [2024-12-09 11:37:57.943274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.074 [2024-12-09 11:37:57.943282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:06.075 [2024-12-09 11:37:57.943361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 
11:37:57.943533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.943558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.943566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e8790 is same with the state(6) to be set 00:23:06.075 [2024-12-09 11:37:57.944835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.944984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.944993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.075 [2024-12-09 11:37:57.945242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.075 [2024-12-09 11:37:57.945250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.076 [2024-12-09 11:37:57.945925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.076 [2024-12-09 11:37:57.945934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.945942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.945952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.945959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.945967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e4a50 is same with the state(6) to be set 00:23:06.077 [2024-12-09 11:37:57.947232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947300] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-12-09 11:37:57.947827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:06.077 [2024-12-09 11:37:57.947835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:06.077 [2024-12-09 11:37:57.947844 - 11:37:57.948368] [repeated nvme_qpair.c NOTICE pairs, one per outstanding command: READ sqid:1 cid:38-63 (lba:21248-24448) and WRITE sqid:1 cid:0-3 (lba:24576-24960), len:128 each, every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:06.078 [2024-12-09 11:37:57.948377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e5cd0 is same with the state(6) to be set
00:23:06.078 [2024-12-09 11:37:57.949650 - 11:37:57.950763] [repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0-63 (lba:24576-32640), len:128 each, every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:06.080 [2024-12-09 11:37:57.950772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e95b0 is same with the state(6) to be set
00:23:06.080 [2024-12-09 11:37:57.952048 - 11:37:57.953203] [repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:4-63 (lba:25088-32640) and WRITE sqid:1 cid:0-3 (lba:32768-33152), len:128 each, every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:06.081 [2024-12-09 11:37:57.953212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ea870 is same with the state(6) to be set
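[Editorial note on the flood above: when an NVMe/TCP qpair is torn down, the SPDK driver walks every outstanding I/O and logs a nvme_io_qpair_print_command / spdk_nvme_print_completion pair for each. The "(00/08)" is the completion's (sct/sc) pair in hex: status code type 0x0 is Generic Command Status and status code 0x08 is "Command Aborted due to SQ Deletion" per the NVMe spec, which matches the "ABORTED - SQ DELETION" text. A minimal, hypothetical post-processing sketch follows; it is not part of SPDK or of this test suite, and assumes only the two line formats visible in this log.]

#!/usr/bin/env python3
# Hypothetical helper: condense the SPDK abort floods above into a summary.
# Assumes the nvme_io_qpair_print_command / spdk_nvme_print_completion
# formats shown in this log; names here are illustrative only.
import re
import sys

CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                 r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
CPL = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ([A-Z -]+?) "
                 r"\(([0-9a-fA-F]+)/([0-9a-fA-F]+)\)")

def summarize(stream):
    lbas = {}          # opcode -> LBAs of aborted commands
    statuses = set()   # (status text, sct, sc) triples seen
    for line in stream:
        # Lines in this log carry several fused entries, so scan each
        # line for every match rather than only the first.
        for m in CMD.finditer(line):
            lbas.setdefault(m.group(1), []).append(int(m.group(5)))
        for c in CPL.finditer(line):
            # The "(00/08)" fields are hex sct/sc; sct 0x0 = generic
            # status, sc 0x08 = Command Aborted due to SQ Deletion.
            statuses.add((c.group(1), int(c.group(2), 16), int(c.group(3), 16)))
    for op, vals in sorted(lbas.items()):
        print(f"{op}: {len(vals)} commands aborted, lba {min(vals)}..{max(vals)}")
    for text, sct, sc in sorted(statuses):
        print(f"completion status: {text} (sct={sct:#x}, sc={sc:#x})")

if __name__ == "__main__":
    summarize(sys.stdin)

[Example (hypothetical) invocation: grep nvme_qpair build.log | python3 summarize_aborts.py]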
00:23:06.082 [2024-12-09 11:37:57.954478 - 11:37:57.955174] [repeated nvme_qpair.c NOTICE pairs: READ sqid:1 cid:0-39 (lba:16384-21376), len:128 each, every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:06.083 [2024-12-09 11:37:57.955184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:06.083 [2024-12-09 11:37:57.955370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 
11:37:57.955542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-12-09 11:37:57.955584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.083 [2024-12-09 11:37:57.955592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21240e0 is same with the state(6) to be set 00:23:06.083 [2024-12-09 11:37:57.957773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:06.083 [2024-12-09 11:37:57.957801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:06.083 [2024-12-09 11:37:57.957814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:06.083 [2024-12-09 11:37:57.957828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:06.083 [2024-12-09 11:37:57.957909] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:06.083 [2024-12-09 11:37:57.957924] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:06.083 [2024-12-09 11:37:57.957938] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
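(Editor's note on the condensed abort burst above: the entries are internally consistent with a single sequential verify job at queue depth 64. Each READ is 128 blocks long, so the LBA of command cid follows

    lba(cid) = 16384 + 128 * cid        (cid = 1 -> 16512, cid = 63 -> 24448)

i.e. 64 outstanding commands covering 8192 contiguous blocks, all aborted together when the submission queue was deleted during the shutdown test.)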
00:23:06.083 [2024-12-09 11:37:57.958028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:06.083 [2024-12-09 11:37:57.958040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:06.083 task offset: 27392 on job bdev=Nvme7n1 fails
00:23:06.083
00:23:06.083 Latency(us)
00:23:06.083 [2024-12-09T10:37:58.245Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
[... every job below ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range start 0x0 length 0x400, and ended in about its listed runtime with error ...]
00:23:06.083 Nvme1n1  :       0.97  131.65    8.23   65.83  0.00  320588.23   18459.31  262144.00
00:23:06.083 Nvme2n1  :       0.97  196.99   12.31   65.66  0.00  236257.92   15728.64  260396.37
00:23:06.083 Nvme3n1  :       0.97  198.76   12.42   66.25  0.00  229310.51   16274.77  249910.61
00:23:06.083 Nvme4n1  :       0.98  196.51   12.28   65.50  0.00  227307.63   10485.76  253405.87
00:23:06.083 Nvme5n1  :       0.98  134.77    8.42   65.34  0.00  291687.86   18786.99  276125.01
00:23:06.084 Nvme6n1  :       0.96  197.63   12.35   66.57  0.00  215701.60   14854.83  248162.99
00:23:06.084 Nvme7n1  :       0.96  200.09   12.51   66.70  0.00  208771.84   12451.84  246415.36
00:23:06.084 Nvme8n1  :       0.98  195.55   12.22   65.18  0.00  209564.59   20206.93  265639.25
00:23:06.084 Nvme9n1  :       0.98  199.13   12.45   65.02  0.00  202292.62   26214.40  204472.32
00:23:06.084 Nvme10n1 :       0.99  129.73    8.11   64.87  0.00  268481.71   21954.56  246415.36
00:23:06.084 [2024-12-09T10:37:58.246Z] ===================================================================================================================
00:23:06.084 [2024-12-09T10:37:58.246Z] Total    :            1780.82  111.30  656.93  0.00  236784.72   10485.76  276125.01
00:23:06.084 [2024-12-09 11:37:57.984976] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:06.084 [2024-12-09 11:37:57.985005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:06.084 [2024-12-09 11:37:57.985486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:06.084 [2024-12-09 11:37:57.985502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ecdb10 with addr=10.0.0.2, port=4420
00:23:06.084 [2024-12-09 11:37:57.985512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecdb10 is same with the state(6) to be set
[... the same connect() failed, errno = 111 / sock connection error / recv state triplet repeats for tqpair=0x1eda430, 0x1eda230 and 0x23071c0, all with addr=10.0.0.2, port=4420 (11:37:57.985678-986351) ...]
00:23:06.084 [2024-12-09 11:37:57.988216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:06.084 [2024-12-09 11:37:57.988229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
[... the triplet repeats again for tqpair=0x2351570, 0x2351750 and 0x233db80 (11:37:57.988487-989219) ...]
[... nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=... (9): Bad file descriptor, for 0x1ecdb10, 0x1eda430, 0x1eda230 and 0x23071c0 (11:37:57.989232-989267) ...]
[... bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress, for cnode3, cnode5, cnode4, cnode2 and cnode1 (11:37:57.989295-989345) ...]
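(Editor's note, a quick consistency check on the latency table above: with a fixed 64 KiB I/O size, 65536 B = 1/16 MiB, so the MiB/s column is just IOPS / 16, and the per-device IOPS sum to the total row:

    Nvme1n1: 131.65 IOPS x 65536 B = 8,627,814 B/s = 8.23 MiB/s
    Total:   1780.82 IOPS / 16 = 111.30 MiB/s

so the aggregate throughput line agrees with the ten individual verify jobs.)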
00:23:06.084 [2024-12-09 11:37:57.989417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
[... the connect() failed, errno = 111 / sock connection error / recv state triplet repeats for tqpair=0x1df9610, 0x230e9b0 and 0x1ecaf40, all with addr=10.0.0.2, port=4420 (11:37:57.989792-990840) ...]
[... nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=... (9): Bad file descriptor, for 0x2351570, 0x2351750, 0x233db80, 0x1df9610, 0x230e9b0 and 0x1ecaf40 (11:37:57.990229-990961) ...]
00:23:06.084 [2024-12-09 11:37:57.990256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:06.084 [2024-12-09 11:37:57.990263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:06.084 [2024-12-09 11:37:57.990271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:06.084 [2024-12-09 11:37:57.990280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
[... the same Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed sequence repeats for cnode2, cnode4, cnode5, cnode8, cnode9, cnode10, cnode7, cnode6 and cnode3 (11:37:57.990288-991074) ...]
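(Editor's note: errno 111 in the connect() failures above is ECONNREFUSED on Linux; the target side of 10.0.0.2:4420 has already shut down while the host keeps retrying the reconnects. A hypothetical probe, not part of the harness, that exposes the same condition from a shell; nc here is just one convenient TCP client:

    # Probe the NVMe/TCP listener; a refused TCP connect here is
    # exactly the errno = 111 the log reports.
    if nc -z -w 1 10.0.0.2 4420; then
            echo "target still listening on 10.0.0.2:4420"
    else
            echo "connect refused or timed out (ECONNREFUSED = errno 111)"
    fi
)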
00:23:06.085 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3594587 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3594587 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3594587 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:07.025 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.026 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.026 rmmod nvme_tcp 00:23:07.287 
rmmod nvme_fabrics 00:23:07.287 rmmod nvme_keyring 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3594360 ']' 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3594360 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3594360 ']' 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3594360 00:23:07.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3594360) - No such process 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3594360 is not found' 00:23:07.287 Process with pid 3594360 is not found 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.287 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.199 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.199 00:23:09.199 real 0m7.663s 00:23:09.199 user 0m18.372s 00:23:09.199 sys 0m1.267s 00:23:09.199 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.199 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.199 ************************************ 00:23:09.199 END TEST nvmf_shutdown_tc3 00:23:09.199 ************************************ 00:23:09.461 11:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:09.461 ************************************ 00:23:09.461 START TEST nvmf_shutdown_tc4 00:23:09.461 ************************************ 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=()
[... 11:38:01 nvmf/common.sh@317-344: the trace declares the pci_drivers, net_devs, e810, x722 and mlx lookup arrays and appends the supported device IDs - Intel 0x1592 and 0x159b (e810), 0x37d2 (x722), and Mellanox 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013 (mlx) ...]
11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:09.461 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:09.461 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.461 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.462 11:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:09.462 Found net devices under 0000:31:00.0: cvl_0_0 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:09.462 Found net devices under 0000:31:00.1: cvl_0_1 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.462 11:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.462 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:23:09.722 00:23:09.722 --- 10.0.0.2 ping statistics --- 00:23:09.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.722 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:23:09.722 00:23:09.722 --- 10.0.0.1 ping statistics --- 00:23:09.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.722 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3595897 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3595897 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3595897 ']' 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
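For reference, the nvmf_tcp_init sequence traced above boils down to a simple topology: the first e810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, with port 4420 opened for NVMe/TCP and a ping in each direction as a sanity check. A condensed sketch of the same steps (device names and addresses as reported by the harness; the real helper also flushes stale addresses first and tags its iptables rule with an SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk                                  # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1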
00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:09.722 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:09.722 [2024-12-09 11:38:01.836569] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:23:09.722 [2024-12-09 11:38:01.836633] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.984 [2024-12-09 11:38:01.933947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.984 [2024-12-09 11:38:01.967727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.984 [2024-12-09 11:38:01.967757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.984 [2024-12-09 11:38:01.967763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.984 [2024-12-09 11:38:01.967768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.984 [2024-12-09 11:38:01.967773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.984 [2024-12-09 11:38:01.969352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.984 [2024-12-09 11:38:01.969509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.984 [2024-12-09 11:38:01.969665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.984 [2024-12-09 11:38:01.969666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:10.552 [2024-12-09 11:38:02.677471] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
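The target launch at nvmf/common.sh@508 above stacks `ip netns exec cvl_0_0_ns_spdk` four times in front of nvmf_tgt; the prefix is prepended to NVMF_APP on each nvmftestinit, so it has evidently accumulated across this script's earlier test cases, which is cosmetic since re-entering the namespace you are already in is a no-op. rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so the two steps traced here amount to roughly the following (a sketch with flags copied from the trace, not a verbatim excerpt of the harness):

  # launch the target inside the namespace (core mask 0x1E = reactors on cores 1-4)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  # once the app is listening on /var/tmp/spdk.sock, create the TCP transport
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192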
00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.552 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # 
rpc_cmd 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.813 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:10.813 Malloc1 00:23:10.813 [2024-12-09 11:38:02.792727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.813 Malloc2 00:23:10.813 Malloc3 00:23:10.813 Malloc4 00:23:10.813 Malloc5 00:23:10.813 Malloc6 00:23:11.074 Malloc7 00:23:11.074 Malloc8 00:23:11.074 Malloc9 00:23:11.074 Malloc10 00:23:11.074 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.074 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:11.074 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.074 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:11.074 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3596280 00:23:11.074 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:11.074 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:11.335 [2024-12-09 11:38:03.254561] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
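Each pass of the create_subsystems loop above `cat`s one block of RPCs into rpcs.txt, and the batch is then replayed through rpc_cmd; that is where Malloc1 through Malloc10, the cnode1..cnode10 subsystems, and the listener on 10.0.0.2:4420 come from. A sketch of one generated block, using SPDK's standard rpc.py verbs (the malloc size and block size are not visible in this trace and are assumed here for illustration):

  # subsystem i (shown for i=1): one malloc bdev, exported over NVMe/TCP on 4420
  bdev_malloc_create -b Malloc1 128 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then drives the whole set: -q 128 is the queue depth, -o 45056 the I/O size in bytes, -w randwrite the workload, -t 20 the run time in seconds, and -r the NVMe-oF connection string; the remaining flags are left as traced.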
00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3595897 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3595897 ']' 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3595897 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3595897 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3595897' 00:23:16.626 killing process with pid 3595897 00:23:16.626 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3595897 00:23:16.627 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3595897 00:23:16.627 [2024-12-09 11:38:08.270208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69620 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69620 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69620 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69af0 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69af0 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69af0 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69af0 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69fc0 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69fc0 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.270922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69fc0 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.271604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f69150 is same with the state(6) to be set 00:23:16.627 [2024-12-09 11:38:08.271638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f69150 is same with the state(6) to be set
[the same recv-state *ERROR* record repeats a handful of times each for tqpair 0x1f69150, 0x1fb0a90, 0x1fb0f80, 0x1fb1470, 0x1fb05c0, 0x1fafc00 and 0x1fb00d0; the repeats are elided]
00:23:16.627 Write completed with error (sct=0, sc=8)
00:23:16.627 starting I/O failed: -6
[the two records above repeat, interleaved, for every queued I/O on each of the queue pairs below; only the first occurrence is kept]
00:23:16.627 [2024-12-09 11:38:08.275517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:16.627 [2024-12-09 11:38:08.276432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:16.628 [2024-12-09 11:38:08.278744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:16.628 NVMe io qpair process completion error
00:23:16.629 [2024-12-09 11:38:08.279963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:16.629 [2024-12-09 11:38:08.280889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:16.629 [2024-12-09 11:38:08.281813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:16.630 [2024-12-09 11:38:08.283436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:16.630 NVMe io qpair process completion error
00:23:16.630 [2024-12-09 11:38:08.284569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:16.630 [2024-12-09 11:38:08.285588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:16.631 [2024-12-09 11:38:08.286515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[further aborted-write records of the same two shapes follow and are elided]
00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 Write completed with error (sct=0, sc=8) 00:23:16.631 starting I/O failed: -6 00:23:16.631 [2024-12-09 11:38:08.289047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:16.631 NVMe io qpair process completion error 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 [2024-12-09 11:38:08.290245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:16.632 starting I/O failed: -6 00:23:16.632 starting I/O failed: -6 00:23:16.632 starting I/O failed: -6 00:23:16.632 starting I/O failed: -6 00:23:16.632 starting I/O failed: -6 00:23:16.632 starting I/O failed: -6 00:23:16.632 starting I/O failed: -6 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write 
completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with 
error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 [2024-12-09 11:38:08.292325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 
00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.632 starting I/O failed: -6 00:23:16.632 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 
00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 [2024-12-09 11:38:08.294026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:16.633 NVMe io qpair process completion error 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write 
completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 [2024-12-09 11:38:08.295002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 [2024-12-09 11:38:08.295812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 
Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.633 Write completed with error (sct=0, sc=8) 00:23:16.633 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write 
completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 [2024-12-09 11:38:08.296751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting 
I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 [2024-12-09 11:38:08.298229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:16.634 NVMe io qpair process completion error 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, 
sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 starting I/O failed: -6 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.634 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 [2024-12-09 11:38:08.299457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with 
error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 [2024-12-09 11:38:08.300278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 
00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 [2024-12-09 11:38:08.301207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write 
completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.635 starting I/O failed: -6 00:23:16.635 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write 
completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 [2024-12-09 11:38:08.303992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:16.636 NVMe io qpair process completion error 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 Write completed with error (sct=0, sc=8) 00:23:16.636 starting I/O failed: -6 00:23:16.636 Write 
completed with error (sct=0, sc=8)
00:23:16.636 [2024-12-09 11:38:08.305186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:16.636 starting I/O failed: -6
00:23:16.636 Write completed with error (sct=0, sc=8)
00:23:16.637 [2024-12-09 11:38:08.306174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:16.637 Write completed with error (sct=0, sc=8)
00:23:16.637 starting I/O failed: -6
00:23:16.637 [2024-12-09 11:38:08.307109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:16.637 Write completed with error (sct=0, sc=8)
00:23:16.637 starting I/O failed: -6
00:23:16.637 [2024-12-09 11:38:08.308759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:16.637 NVMe io qpair process completion error
00:23:16.638 Write completed with error (sct=0, sc=8)
00:23:16.638 starting I/O failed: -6
00:23:16.638 [2024-12-09 11:38:08.309788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:16.638 [2024-12-09 11:38:08.310691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:16.638 [2024-12-09 11:38:08.311608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:16.639 [2024-12-09 11:38:08.314536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:16.639 NVMe io qpair process completion error
00:23:16.639 Write completed with error (sct=0, sc=8)
00:23:16.639 starting I/O failed: -6
00:23:16.639 [2024-12-09 11:38:08.315861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:16.640 [2024-12-09 11:38:08.316696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:16.640 [2024-12-09 11:38:08.317632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:16.641 [2024-12-09 11:38:08.319084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:16.641 NVMe io qpair process completion error
00:23:16.641 Write completed with error (sct=0, sc=8)
00:23:16.641 starting I/O failed: -6
00:23:16.641 [2024-12-09 11:38:08.320694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:16.642 [2024-12-09 11:38:08.321720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:16.642 [2024-12-09 11:38:08.324472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:16.642 NVMe io qpair process completion error
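For context on the failures above: the "-6" is -ENXIO ("No such device or address"), which spdk_nvme_qpair_process_completions() reports once a qpair's TCP connection to the target is gone. That is expected in this shutdown test, since the target is killed while spdk_nvme_perf is still submitting writes. A minimal sketch, assuming only SPDK's public NVMe API (this is not the actual spdk_nvme_perf code), of how a host poll loop can detect the condition and stop submitting:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair. A negative return from
 * spdk_nvme_qpair_process_completions() (e.g. -ENXIO, the -6 logged
 * above) means the transport has failed; stop using this qpair. */
static bool
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no batch limit */);

	if (rc < 0) {
		fprintf(stderr, "qpair poll failed: %d (%s)\n", (int)rc, strerror(-(int)rc));
		return false; /* caller should tear down or reconnect */
	}
	return true; /* rc was the number of completions reaped */
}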
00:23:16.642 Initializing NVMe Controllers
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:16.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:16.642 Controller IO queue size 128, less than required.
00:23:16.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
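The repeated queue-size advisory means the workload drives more outstanding requests than the I/O queue size granted for each controller (128 here), so the surplus waits inside the NVMe driver. A hedged sketch of the host-side knob involved, using SPDK's public spdk_nvme_ctrlr_opts; the transport string mirrors the addresses in this log, and the value 256 is purely illustrative (the target may still grant less):

#include <stddef.h>
#include "spdk/nvme.h"

/* Sketch: connect to one of the NVMe-oF/TCP subsystems from this log,
 * asking for a deeper I/O queue so fewer requests queue in the driver.
 * Assumes the SPDK environment was already set up via spdk_env_init(). */
static struct spdk_nvme_ctrlr *
connect_with_queue_size(void)
{
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts opts;

	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode7") != 0) {
		return NULL;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.io_queue_size = 256; /* illustrative; target may cap this */

	return spdk_nvme_connect(&trid, &opts, sizeof(opts));
}

The other direction, which is what the advisory literally suggests, is to lower the benchmark's queue depth (spdk_nvme_perf's -q option) below the granted queue size.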
00:23:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:16.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:16.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:16.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:16.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:16.643 Initialization complete. Launching workers.
00:23:16.643 ========================================================
00:23:16.643                                                                              Latency(us)
00:23:16.643 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1881.19      80.83   68061.02     783.64  135886.82
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1850.82      79.53   69191.60     910.39  137724.87
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1873.49      80.50   67680.51     568.21  150546.07
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1869.33      80.32   67852.01     621.22  120088.62
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1872.04      80.44   67777.06     520.17  122779.43
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1859.14      79.88   68287.92     376.98  120530.67
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1892.01      81.30   67118.52     841.66  118070.88
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1906.78      81.93   66617.73     714.20  128143.78
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1864.34      80.11   68172.94     635.63  130869.48
00:23:16.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1738.90      74.72   73112.50     932.63  119885.53
00:23:16.643 ========================================================
00:23:16.643 Total                                                                  :   18608.05     799.56   68347.81     376.98  150546.07
00:23:16.643
00:23:16.643 [2024-12-09 11:38:08.329406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d380 is same with the state(6) to be set
00:23:16.643 [2024-12-09 11:38:08.329450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0c6c0 is same with the state(6) to be set
00:23:16.643 [2024-12-09 11:38:08.329480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0e360 is same with the state(6) to be set
00:23:16.643 [2024-12-09 11:38:08.329508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0c390 is same with the state(6) to be set
00:23:16.643 [2024-12-09 11:38:08.329536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0c9f0 is same with the state(6) to be set
00:23:16.643 [2024-12-09 11:38:08.329565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xf0c060 is same with the state(6) to be set 00:23:16.643 [2024-12-09 11:38:08.329594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d9e0 is same with the state(6) to be set 00:23:16.643 [2024-12-09 11:38:08.329621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d050 is same with the state(6) to be set 00:23:16.643 [2024-12-09 11:38:08.329648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d6b0 is same with the state(6) to be set 00:23:16.643 [2024-12-09 11:38:08.329675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0e540 is same with the state(6) to be set 00:23:16.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:16.643 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3596280 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3596280 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3596280 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.586 11:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.586 rmmod nvme_tcp 00:23:17.586 rmmod nvme_fabrics 00:23:17.586 rmmod nvme_keyring 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3595897 ']' 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3595897 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3595897 ']' 00:23:17.586 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3595897 00:23:17.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3595897) - No such process 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3595897 is not found' 00:23:17.587 Process with pid 3595897 is not found 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.587 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.136 11:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:20.136 00:23:20.136 real 0m10.268s 00:23:20.136 user 0m27.930s 00:23:20.136 sys 0m4.001s 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:20.136 ************************************ 00:23:20.136 END TEST nvmf_shutdown_tc4 00:23:20.136 ************************************ 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:20.136 00:23:20.136 real 0m43.176s 00:23:20.136 user 1m43.734s 00:23:20.136 sys 0m13.854s 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:20.136 ************************************ 00:23:20.136 END TEST nvmf_shutdown 00:23:20.136 ************************************ 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:20.136 ************************************ 00:23:20.136 START TEST nvmf_nsid 00:23:20.136 ************************************ 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:20.136 * Looking for test storage... 
00:23:20.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.136 --rc genhtml_branch_coverage=1 00:23:20.136 --rc genhtml_function_coverage=1 00:23:20.136 --rc genhtml_legend=1 00:23:20.136 --rc geninfo_all_blocks=1 00:23:20.136 --rc geninfo_unexecuted_blocks=1 00:23:20.136 00:23:20.136 ' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.136 --rc genhtml_branch_coverage=1 00:23:20.136 --rc genhtml_function_coverage=1 00:23:20.136 --rc genhtml_legend=1 00:23:20.136 --rc geninfo_all_blocks=1 00:23:20.136 --rc geninfo_unexecuted_blocks=1 00:23:20.136 00:23:20.136 ' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.136 --rc genhtml_branch_coverage=1 00:23:20.136 --rc genhtml_function_coverage=1 00:23:20.136 --rc genhtml_legend=1 00:23:20.136 --rc geninfo_all_blocks=1 00:23:20.136 --rc geninfo_unexecuted_blocks=1 00:23:20.136 00:23:20.136 ' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.136 --rc genhtml_branch_coverage=1 00:23:20.136 --rc genhtml_function_coverage=1 00:23:20.136 --rc genhtml_legend=1 00:23:20.136 --rc geninfo_all_blocks=1 00:23:20.136 --rc geninfo_unexecuted_blocks=1 00:23:20.136 00:23:20.136 ' 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.136 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.136 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:20.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.137 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:28.279 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:28.279 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
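For reference, the PCI scan above matched both ports of an Intel E810 NIC (vendor:device 0x8086:0x159b, bound to the ice driver). A quick manual cross-check of the same IDs uses standard lspci/sysfs queries; the PCI address below is taken from the log and the commands are illustrative:

    # List E810 ports by the vendor:device pair the scan matched above
    lspci -d 8086:159b
    # Show which net device backs a given port (address per the log)
    ls /sys/bus/pci/devices/0000:31:00.0/net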
00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:28.279 Found net devices under 0000:31:00.0: cvl_0_0 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:28.279 Found net devices under 0000:31:00.1: cvl_0_1 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.279 11:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:23:28.279 00:23:28.279 --- 10.0.0.2 ping statistics --- 00:23:28.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.279 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:23:28.279 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:23:28.279 00:23:28.279 --- 10.0.0.1 ping statistics --- 00:23:28.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.280 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3601694 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3601694 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3601694 ']' 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:28.280 [2024-12-09 11:38:19.441774] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
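The waitforlisten step above blocks until the freshly started nvmf_tgt answers on its RPC socket. A rough hand-rolled equivalent is sketched below; rpc_get_methods is a stock SPDK RPC and -s selects the socket, but the polling loop itself is illustrative, not the test framework's actual implementation:

    # Poll the app's RPC socket until it responds, mirroring waitforlisten
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done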
00:23:28.280 [2024-12-09 11:38:19.441833] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.280 [2024-12-09 11:38:19.524764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.280 [2024-12-09 11:38:19.565455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.280 [2024-12-09 11:38:19.565491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.280 [2024-12-09 11:38:19.565500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.280 [2024-12-09 11:38:19.565507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.280 [2024-12-09 11:38:19.565513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.280 [2024-12-09 11:38:19.566121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3601717 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3fe43709-3d78-4878-8fae-01843f33cd5b 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1808086f-c38b-4d5a-9dc1-8342611d5f56 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d8aa00c9-f93b-43d0-91dc-97853e8ebd13 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:28.280 null0 00:23:28.280 null1 00:23:28.280 null2 00:23:28.280 [2024-12-09 11:38:19.761074] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:23:28.280 [2024-12-09 11:38:19.761124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601717 ] 00:23:28.280 [2024-12-09 11:38:19.763656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.280 [2024-12-09 11:38:19.787854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3601717 /var/tmp/tgt2.sock 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3601717 ']' 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:28.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
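Roughly, the null0/null1/null2 namespaces above each get one of the freshly generated UUIDs when they are attached to a subsystem on the second target. The sketch below uses real SPDK RPC method names (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns), but the exact option spellings are assumptions; verify with `rpc.py <method> -h` before relying on them:

    # Sketch: give a namespace a caller-chosen UUID on the tgt2 socket
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
    uuid=$(uuidgen)
    $RPC bdev_null_create null0 100 4096
    $RPC nvmf_create_subsystem nqn.2024-10.io.spdk:cnode0 -a
    $RPC nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode0 null0 --uuid "$uuid"
    # The NGUID the host later reads back via `nvme id-ns ... | jq -r .nguid`
    # is this UUID with the dashes stripped (cf. the uuid2nguid / tr -d -
    # comparison further below)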
00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.280 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:28.280 [2024-12-09 11:38:19.847642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.280 [2024-12-09 11:38:19.883957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.280 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.280 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:28.280 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:28.280 [2024-12-09 11:38:20.371117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.280 [2024-12-09 11:38:20.387238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:28.280 nvme0n1 nvme0n2 00:23:28.280 nvme1n1 00:23:28.541 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:28.541 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:28.541 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:29.924 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:30.865 11:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3fe43709-3d78-4878-8fae-01843f33cd5b 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3fe437093d7848788fae01843f33cd5b 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3FE437093D7848788FAE01843F33CD5B 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3FE437093D7848788FAE01843F33CD5B == \3\F\E\4\3\7\0\9\3\D\7\8\4\8\7\8\8\F\A\E\0\1\8\4\3\F\3\3\C\D\5\B ]] 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1808086f-c38b-4d5a-9dc1-8342611d5f56 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:30.865 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1808086fc38b4d5a9dc18342611d5f56 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1808086FC38B4D5A9DC18342611D5F56 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1808086FC38B4D5A9DC18342611D5F56 == \1\8\0\8\0\8\6\F\C\3\8\B\4\D\5\A\9\D\C\1\8\3\4\2\6\1\1\D\5\F\5\6 ]] 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:31.125 11:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d8aa00c9-f93b-43d0-91dc-97853e8ebd13 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d8aa00c9f93b43d091dc97853e8ebd13 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D8AA00C9F93B43D091DC97853E8EBD13 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D8AA00C9F93B43D091DC97853E8EBD13 == \D\8\A\A\0\0\C\9\F\9\3\B\4\3\D\0\9\1\D\C\9\7\8\5\3\E\8\E\B\D\1\3 ]] 00:23:31.125 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3601717 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3601717 ']' 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3601717 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3601717 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3601717' 00:23:31.387 killing process with pid 3601717 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3601717 00:23:31.387 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3601717 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.649 rmmod nvme_tcp 00:23:31.649 rmmod nvme_fabrics 00:23:31.649 rmmod nvme_keyring 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3601694 ']' 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3601694 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3601694 ']' 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3601694 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3601694 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3601694' 00:23:31.649 killing process with pid 3601694 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3601694 00:23:31.649 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3601694 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.911 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.823 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.823 00:23:33.823 real 0m14.120s 00:23:33.823 user 
0m10.454s 00:23:33.823 sys 0m6.713s 00:23:33.823 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.823 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:33.823 ************************************ 00:23:33.823 END TEST nvmf_nsid 00:23:33.823 ************************************ 00:23:33.823 11:38:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:33.823 00:23:33.823 real 13m4.399s 00:23:33.823 user 27m20.392s 00:23:33.823 sys 3m52.061s 00:23:33.823 11:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.823 11:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.823 ************************************ 00:23:33.823 END TEST nvmf_target_extra 00:23:33.823 ************************************ 00:23:34.083 11:38:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:34.083 11:38:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:34.083 11:38:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.083 11:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:34.083 ************************************ 00:23:34.083 START TEST nvmf_host 00:23:34.083 ************************************ 00:23:34.083 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:34.083 * Looking for test storage... 00:23:34.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:34.083 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:34.083 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:34.083 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.084 --rc genhtml_branch_coverage=1 00:23:34.084 --rc genhtml_function_coverage=1 00:23:34.084 --rc genhtml_legend=1 00:23:34.084 --rc geninfo_all_blocks=1 00:23:34.084 --rc geninfo_unexecuted_blocks=1 00:23:34.084 00:23:34.084 ' 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.084 --rc genhtml_branch_coverage=1 00:23:34.084 --rc genhtml_function_coverage=1 00:23:34.084 --rc genhtml_legend=1 00:23:34.084 --rc geninfo_all_blocks=1 00:23:34.084 --rc geninfo_unexecuted_blocks=1 00:23:34.084 00:23:34.084 ' 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.084 --rc genhtml_branch_coverage=1 00:23:34.084 --rc genhtml_function_coverage=1 00:23:34.084 --rc genhtml_legend=1 00:23:34.084 --rc geninfo_all_blocks=1 00:23:34.084 --rc geninfo_unexecuted_blocks=1 00:23:34.084 00:23:34.084 ' 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.084 --rc genhtml_branch_coverage=1 00:23:34.084 --rc genhtml_function_coverage=1 00:23:34.084 --rc genhtml_legend=1 00:23:34.084 --rc geninfo_all_blocks=1 00:23:34.084 --rc geninfo_unexecuted_blocks=1 00:23:34.084 00:23:34.084 ' 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.084 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
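The common.sh defaults being traced here (ports 4420/4421/4422, a host NQN freshly generated by nvme gen-hostnqn) are what the host side of these tests later feeds to nvme-cli; the remaining defaults continue in the trace below. A minimal illustrative sketch of how they are consumed, assuming only that a target is already listening on 10.0.0.2:4420 and that the subsystem uses the suite's default NVME_SUBNQN (this is editorial shell, not part of the trace):

    HOSTNQN=$(nvme gen-hostnqn)                      # same helper common.sh invokes above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn --hostnqn="$HOSTNQN"
    nvme list-subsys                                 # confirm the new controller path
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn   # tear down when done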
00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.346 ************************************ 00:23:34.346 START TEST nvmf_multicontroller 00:23:34.346 ************************************ 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:34.346 * Looking for test storage... 
00:23:34.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.346 --rc genhtml_branch_coverage=1 00:23:34.346 --rc genhtml_function_coverage=1 00:23:34.346 --rc genhtml_legend=1 00:23:34.346 --rc geninfo_all_blocks=1 00:23:34.346 --rc geninfo_unexecuted_blocks=1 00:23:34.346 00:23:34.346 ' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.346 --rc genhtml_branch_coverage=1 00:23:34.346 --rc genhtml_function_coverage=1 00:23:34.346 --rc genhtml_legend=1 00:23:34.346 --rc geninfo_all_blocks=1 00:23:34.346 --rc geninfo_unexecuted_blocks=1 00:23:34.346 00:23:34.346 ' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.346 --rc genhtml_branch_coverage=1 00:23:34.346 --rc genhtml_function_coverage=1 00:23:34.346 --rc genhtml_legend=1 00:23:34.346 --rc geninfo_all_blocks=1 00:23:34.346 --rc geninfo_unexecuted_blocks=1 00:23:34.346 00:23:34.346 ' 00:23:34.346 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.346 --rc genhtml_branch_coverage=1 00:23:34.346 --rc genhtml_function_coverage=1 00:23:34.346 --rc genhtml_legend=1 00:23:34.346 --rc geninfo_all_blocks=1 00:23:34.347 --rc geninfo_unexecuted_blocks=1 00:23:34.347 00:23:34.347 ' 00:23:34.347 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:34.609 11:38:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:34.609 11:38:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.609 11:38:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.759 
11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:42.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.759 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:42.760 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.760 11:38:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:42.760 Found net devices under 0000:31:00.0: cvl_0_0 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:42.760 Found net devices under 0000:31:00.1: cvl_0_1 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
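The two ice ports just discovered (cvl_0_0 under 0000:31:00.0, cvl_0_1 under 0000:31:00.1) are split across network namespaces by nvmf_tcp_init, whose trace follows: the target port moves into a private namespace at 10.0.0.2 while the initiator port stays in the root namespace at 10.0.0.1, so NVMe/TCP test traffic crosses a real physical link. A condensed sketch of that layout, using the same commands the trace records (a sketch given the interface names detected above, not a substitute for common.sh):

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator
    ping -c 1 10.0.0.2                                 # initiator-to-target sanity check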
00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.760 11:38:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:23:42.760 00:23:42.760 --- 10.0.0.2 ping statistics --- 00:23:42.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.760 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:23:42.760 00:23:42.760 --- 10.0.0.1 ping statistics --- 00:23:42.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.760 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3606884 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3606884 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3606884 ']' 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.760 [2024-12-09 11:38:34.085380] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:23:42.760 [2024-12-09 11:38:34.085419] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.760 [2024-12-09 11:38:34.173727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:42.760 [2024-12-09 11:38:34.213416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.760 [2024-12-09 11:38:34.213457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.760 [2024-12-09 11:38:34.213465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.760 [2024-12-09 11:38:34.213472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.760 [2024-12-09 11:38:34.213478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.760 [2024-12-09 11:38:34.214940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.760 [2024-12-09 11:38:34.215075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.760 [2024-12-09 11:38:34.215313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:42.760 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 [2024-12-09 11:38:34.945408] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 Malloc0 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 [2024-12-09 11:38:35.002461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 [2024-12-09 11:38:35.010403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 Malloc1 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3607233 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3607233 /var/tmp/bdevperf.sock 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3607233 ']' 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
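With bdevperf now listening on /var/tmp/bdevperf.sock, the multicontroller test traced below attaches controller NVMe0 once, then asserts that every re-attach under the same bdev name but a conflicting hostnqn, subsystem NQN, or multipath policy is rejected with JSON-RPC error -114. A condensed sketch of that pattern (rpc_cmd in the trace wraps scripts/rpc.py; the flags are the ones the trace itself records, and rpc.py exits nonzero on an error response):

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1          # first attach succeeds
    # Reusing the name NVMe0 with a different hostnqn must fail with -114:
    if $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
        -q nqn.2021-09-7.io.spdk:00001; then
        echo "duplicate attach unexpectedly succeeded" >&2; exit 1
    fi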
00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.022 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.964 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.964 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:43.964 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:43.964 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.964 11:38:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.225 NVMe0n1 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.225 1 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:44.225 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.226 request: 00:23:44.226 { 00:23:44.226 "name": "NVMe0", 00:23:44.226 "trtype": "tcp", 00:23:44.226 "traddr": "10.0.0.2", 00:23:44.226 "adrfam": "ipv4", 00:23:44.226 "trsvcid": "4420", 00:23:44.226 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:44.226 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:44.226 "hostaddr": "10.0.0.1", 00:23:44.226 "prchk_reftag": false, 00:23:44.226 "prchk_guard": false, 00:23:44.226 "hdgst": false, 00:23:44.226 "ddgst": false, 00:23:44.226 "allow_unrecognized_csi": false, 00:23:44.226 "method": "bdev_nvme_attach_controller", 00:23:44.226 "req_id": 1 00:23:44.226 } 00:23:44.226 Got JSON-RPC error response 00:23:44.226 response: 00:23:44.226 { 00:23:44.226 "code": -114, 00:23:44.226 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:44.226 } 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.226 request: 00:23:44.226 { 00:23:44.226 "name": "NVMe0", 00:23:44.226 "trtype": "tcp", 00:23:44.226 "traddr": "10.0.0.2", 00:23:44.226 "adrfam": "ipv4", 00:23:44.226 "trsvcid": "4420", 00:23:44.226 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.226 "hostaddr": "10.0.0.1", 00:23:44.226 "prchk_reftag": false, 00:23:44.226 "prchk_guard": false, 00:23:44.226 "hdgst": false, 00:23:44.226 "ddgst": false, 00:23:44.226 "allow_unrecognized_csi": false, 00:23:44.226 "method": "bdev_nvme_attach_controller", 00:23:44.226 "req_id": 1 00:23:44.226 } 00:23:44.226 Got JSON-RPC error response 00:23:44.226 response: 00:23:44.226 { 00:23:44.226 "code": -114, 00:23:44.226 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:44.226 } 00:23:44.226 11:38:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.226 request: 00:23:44.226 { 00:23:44.226 "name": "NVMe0", 00:23:44.226 "trtype": "tcp", 00:23:44.226 "traddr": "10.0.0.2", 00:23:44.226 "adrfam": "ipv4", 00:23:44.226 "trsvcid": "4420", 00:23:44.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.226 "hostaddr": "10.0.0.1", 00:23:44.226 "prchk_reftag": false, 00:23:44.226 "prchk_guard": false, 00:23:44.226 "hdgst": false, 00:23:44.226 "ddgst": false, 00:23:44.226 "multipath": "disable", 00:23:44.226 "allow_unrecognized_csi": false, 00:23:44.226 "method": "bdev_nvme_attach_controller", 00:23:44.226 "req_id": 1 00:23:44.226 } 00:23:44.226 Got JSON-RPC error response 00:23:44.226 response: 00:23:44.226 { 00:23:44.226 "code": -114, 00:23:44.226 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:44.226 } 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.226 11:38:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.226 request: 00:23:44.226 { 00:23:44.226 "name": "NVMe0", 00:23:44.226 "trtype": "tcp", 00:23:44.226 "traddr": "10.0.0.2", 00:23:44.226 "adrfam": "ipv4", 00:23:44.226 "trsvcid": "4420", 00:23:44.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.226 "hostaddr": "10.0.0.1", 00:23:44.226 "prchk_reftag": false, 00:23:44.226 "prchk_guard": false, 00:23:44.226 "hdgst": false, 00:23:44.226 "ddgst": false, 00:23:44.226 "multipath": "failover", 00:23:44.226 "allow_unrecognized_csi": false, 00:23:44.226 "method": "bdev_nvme_attach_controller", 00:23:44.226 "req_id": 1 00:23:44.226 } 00:23:44.226 Got JSON-RPC error response 00:23:44.226 response: 00:23:44.226 { 00:23:44.226 "code": -114, 00:23:44.226 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:44.226 } 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.226 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.487 NVMe0n1 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
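Each NOT-wrapped attach above is rejected with JSON-RPC error -114 because a controller named NVMe0 already exists on that network path, including the -x disable and -x failover variants, while attaching through the subsystem's second listener on port 4421 succeeds. A sketch of the expected-failure check, reusing the names from this run:

  # re-attaching NVMe0 on the identical path must be rejected (code -114)
  if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1; then
    echo "unexpected success" >&2; exit 1
  fi
  # a second path through the 4421 listener of the same subsystem is accepted
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1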
00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.488 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:44.488 11:38:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.873 { 00:23:45.873 "results": [ 00:23:45.873 { 00:23:45.873 "job": "NVMe0n1", 00:23:45.873 "core_mask": "0x1", 00:23:45.873 "workload": "write", 00:23:45.873 "status": "finished", 00:23:45.873 "queue_depth": 128, 00:23:45.873 "io_size": 4096, 00:23:45.873 "runtime": 1.006218, 00:23:45.873 "iops": 24932.96681236074, 00:23:45.873 "mibps": 97.39440161078414, 00:23:45.873 "io_failed": 0, 00:23:45.873 "io_timeout": 0, 00:23:45.873 "avg_latency_us": 5122.355102040816, 00:23:45.873 "min_latency_us": 2143.5733333333333, 00:23:45.873 "max_latency_us": 12615.68 00:23:45.873 } 00:23:45.873 ], 00:23:45.873 "core_count": 1 00:23:45.873 } 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3607233 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3607233 ']' 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3607233 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3607233 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3607233' 00:23:45.873 killing process with pid 3607233 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3607233 00:23:45.873 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3607233 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:45.874 11:38:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:45.874 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:45.874 [2024-12-09 11:38:35.120865] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:23:45.874 [2024-12-09 11:38:35.120924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607233 ] 00:23:45.874 [2024-12-09 11:38:35.192257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.874 [2024-12-09 11:38:35.228602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.874 [2024-12-09 11:38:36.617113] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9957e2e0-d1f1-4703-aa5f-469552d30dc3 already exists 00:23:45.874 [2024-12-09 11:38:36.617143] bdev.c:8150:bdev_register: *ERROR*: Unable to add uuid:9957e2e0-d1f1-4703-aa5f-469552d30dc3 alias for bdev NVMe1n1 00:23:45.874 [2024-12-09 11:38:36.617152] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:45.874 Running I/O for 1 seconds... 00:23:45.874 24879.00 IOPS, 97.18 MiB/s 00:23:45.874 Latency(us) 00:23:45.874 [2024-12-09T10:38:38.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.874 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:45.874 NVMe0n1 : 1.01 24932.97 97.39 0.00 0.00 5122.36 2143.57 12615.68 00:23:45.874 [2024-12-09T10:38:38.036Z] =================================================================================================================== 00:23:45.874 [2024-12-09T10:38:38.036Z] Total : 24932.97 97.39 0.00 0.00 5122.36 2143.57 12615.68 00:23:45.874 Received shutdown signal, test time was about 1.000000 seconds 00:23:45.874 00:23:45.874 Latency(us) 00:23:45.874 [2024-12-09T10:38:38.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.874 [2024-12-09T10:38:38.036Z] =================================================================================================================== 00:23:45.874 [2024-12-09T10:38:38.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.874 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.874 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.874 rmmod nvme_tcp 00:23:46.135 rmmod nvme_fabrics 00:23:46.135 rmmod nvme_keyring 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:46.135 
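The teardown above (the trap handler plus nvmftestfini) kills bdevperf, deletes both subsystems, and unloads the kernel NVMe modules, which is what produces the rmmod output. Condensed, with the pid and NQNs taken from this run:

  kill -9 3607233                  # bdevperf pid from this run
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
  sync
  modprobe -v -r nvme-tcp          # verbose removal also drops nvme_fabrics/nvme_keyring deps
  modprobe -v -r nvme-fabrics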
11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3606884 ']' 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3606884 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3606884 ']' 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3606884 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3606884 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3606884' 00:23:46.135 killing process with pid 3606884 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3606884 00:23:46.135 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3606884 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.397 11:38:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.311 11:38:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:48.311 00:23:48.311 real 0m14.087s 00:23:48.311 user 0m17.739s 00:23:48.311 sys 0m6.315s 00:23:48.311 11:38:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.311 11:38:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.311 ************************************ 00:23:48.311 END TEST nvmf_multicontroller 00:23:48.311 ************************************ 00:23:48.311 11:38:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:48.311 11:38:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.311 11:38:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.311 11:38:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.573 ************************************ 00:23:48.573 START TEST nvmf_aer 00:23:48.573 ************************************ 00:23:48.573 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:48.573 * Looking for test storage... 00:23:48.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.574 --rc genhtml_branch_coverage=1 00:23:48.574 --rc genhtml_function_coverage=1 00:23:48.574 --rc genhtml_legend=1 00:23:48.574 --rc geninfo_all_blocks=1 00:23:48.574 --rc geninfo_unexecuted_blocks=1 00:23:48.574 00:23:48.574 ' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.574 --rc genhtml_branch_coverage=1 00:23:48.574 --rc genhtml_function_coverage=1 00:23:48.574 --rc genhtml_legend=1 00:23:48.574 --rc geninfo_all_blocks=1 00:23:48.574 --rc geninfo_unexecuted_blocks=1 00:23:48.574 00:23:48.574 ' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.574 --rc genhtml_branch_coverage=1 00:23:48.574 --rc genhtml_function_coverage=1 00:23:48.574 --rc genhtml_legend=1 00:23:48.574 --rc geninfo_all_blocks=1 00:23:48.574 --rc geninfo_unexecuted_blocks=1 00:23:48.574 00:23:48.574 ' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:48.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.574 --rc genhtml_branch_coverage=1 00:23:48.574 --rc genhtml_function_coverage=1 00:23:48.574 --rc genhtml_legend=1 00:23:48.574 --rc geninfo_all_blocks=1 00:23:48.574 --rc geninfo_unexecuted_blocks=1 00:23:48.574 00:23:48.574 ' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.574 11:38:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.721 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:56.722 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:56.722 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:56.722 Found net devices under 0000:31:00.0: cvl_0_0 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:56.722 11:38:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:56.722 Found net devices under 0000:31:00.1: cvl_0_1 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.722 11:38:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:56.722 
11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:56.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:23:56.722 00:23:56.722 --- 10.0.0.2 ping statistics --- 00:23:56.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.722 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:23:56.722 00:23:56.722 --- 10.0.0.1 ping statistics --- 00:23:56.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.722 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3611986 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3611986 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3611986 ']' 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.722 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.723 11:38:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.723 [2024-12-09 11:38:48.283184] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:23:56.723 [2024-12-09 11:38:48.283233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.723 [2024-12-09 11:38:48.361344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.723 [2024-12-09 11:38:48.397580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.723 [2024-12-09 11:38:48.397610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.723 [2024-12-09 11:38:48.397619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.723 [2024-12-09 11:38:48.397626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.723 [2024-12-09 11:38:48.397632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.723 [2024-12-09 11:38:48.399099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.723 [2024-12-09 11:38:48.399221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.723 [2024-12-09 11:38:48.399376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.723 [2024-12-09 11:38:48.399377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.984 [2024-12-09 11:38:49.130377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.984 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.245 Malloc0 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.245 [2024-12-09 11:38:49.197517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.245 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.245 [ 00:23:57.245 { 00:23:57.245 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:57.245 "subtype": "Discovery", 00:23:57.245 "listen_addresses": [], 00:23:57.245 "allow_any_host": true, 00:23:57.245 "hosts": [] 00:23:57.245 }, 00:23:57.245 { 00:23:57.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.245 "subtype": "NVMe", 00:23:57.245 "listen_addresses": [ 00:23:57.245 { 00:23:57.245 "trtype": "TCP", 00:23:57.245 "adrfam": "IPv4", 00:23:57.245 "traddr": "10.0.0.2", 00:23:57.245 "trsvcid": "4420" 00:23:57.245 } 00:23:57.245 ], 00:23:57.245 "allow_any_host": true, 00:23:57.245 "hosts": [], 00:23:57.245 "serial_number": "SPDK00000000000001", 00:23:57.245 "model_number": "SPDK bdev Controller", 00:23:57.245 "max_namespaces": 2, 00:23:57.245 "min_cntlid": 1, 00:23:57.245 "max_cntlid": 65519, 00:23:57.245 "namespaces": [ 00:23:57.245 { 00:23:57.245 "nsid": 1, 00:23:57.245 "bdev_name": "Malloc0", 00:23:57.245 "name": "Malloc0", 00:23:57.245 "nguid": "B7A6C70255744D828CAEA7600E10A207", 00:23:57.246 "uuid": "b7a6c702-5574-4d82-8cae-a7600e10a207" 00:23:57.246 } 00:23:57.246 ] 00:23:57.246 } 00:23:57.246 ] 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3612337 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:57.246 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.507 Malloc1 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.507 Asynchronous Event Request test 00:23:57.507 Attaching to 10.0.0.2 00:23:57.507 Attached to 10.0.0.2 00:23:57.507 Registering asynchronous event callbacks... 00:23:57.507 Starting namespace attribute notice tests for all controllers... 00:23:57.507 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:57.507 aer_cb - Changed Namespace 00:23:57.507 Cleaning up... 
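The whole AER scenario above can be replayed by hand against a running nvmf_tgt. A minimal sketch using the same RPCs seen in the xtrace, assuming scripts/rpc.py talks to the default /var/tmp/spdk.sock and the target is already up:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Start the AER listener in the background, then add a second namespace to
# trigger the namespace-attribute-changed notice it waits for:
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The waitforfile gate in the xtrace is a plain poll loop; reconstructed here from the traced counters (the timeout return value is an assumption, since only the success path appears in this log):

waitforfile() {
    local file=$1 i=0
    # The aer binary touches $file from its AER callback; poll for it.
    while [ ! -e "$file" ]; do
        [ "$i" -lt 200 ] || return 1   # 200 iterations * 0.1 s = 20 s budget (assumed failure path)
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}
waitforfile /tmp/aer_touch_file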
00:23:57.507 [ 00:23:57.507 { 00:23:57.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:57.507 "subtype": "Discovery", 00:23:57.507 "listen_addresses": [], 00:23:57.507 "allow_any_host": true, 00:23:57.507 "hosts": [] 00:23:57.507 }, 00:23:57.507 { 00:23:57.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.507 "subtype": "NVMe", 00:23:57.507 "listen_addresses": [ 00:23:57.507 { 00:23:57.507 "trtype": "TCP", 00:23:57.507 "adrfam": "IPv4", 00:23:57.507 "traddr": "10.0.0.2", 00:23:57.507 "trsvcid": "4420" 00:23:57.507 } 00:23:57.507 ], 00:23:57.507 "allow_any_host": true, 00:23:57.507 "hosts": [], 00:23:57.507 "serial_number": "SPDK00000000000001", 00:23:57.507 "model_number": "SPDK bdev Controller", 00:23:57.507 "max_namespaces": 2, 00:23:57.507 "min_cntlid": 1, 00:23:57.507 "max_cntlid": 65519, 00:23:57.507 "namespaces": [ 00:23:57.507 { 00:23:57.507 "nsid": 1, 00:23:57.507 "bdev_name": "Malloc0", 00:23:57.507 "name": "Malloc0", 00:23:57.507 "nguid": "B7A6C70255744D828CAEA7600E10A207", 00:23:57.507 "uuid": "b7a6c702-5574-4d82-8cae-a7600e10a207" 00:23:57.507 }, 00:23:57.507 { 00:23:57.507 "nsid": 2, 00:23:57.507 "bdev_name": "Malloc1", 00:23:57.507 "name": "Malloc1", 00:23:57.507 "nguid": "2C661417BFFA4C768BF52B2930190836", 00:23:57.507 "uuid": "2c661417-bffa-4c76-8bf5-2b2930190836" 00:23:57.507 } 00:23:57.507 ] 00:23:57.507 } 00:23:57.507 ] 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3612337 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.507 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.508 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.768 rmmod 
nvme_tcp 00:23:57.768 rmmod nvme_fabrics 00:23:57.768 rmmod nvme_keyring 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3611986 ']' 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3611986 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3611986 ']' 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3611986 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3611986 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3611986' 00:23:57.768 killing process with pid 3611986 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3611986 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3611986 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:57.768 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:58.028 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.028 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.028 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.028 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.028 11:38:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.939 11:38:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.939 00:23:59.939 real 0m11.505s 00:23:59.939 user 0m8.363s 00:23:59.939 sys 0m6.044s 00:23:59.939 11:38:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.939 11:38:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.939 ************************************ 00:23:59.939 END TEST nvmf_aer 00:23:59.939 ************************************ 00:23:59.939 11:38:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:59.939 11:38:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.939 11:38:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.939 11:38:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.939 ************************************ 00:23:59.939 START TEST nvmf_async_init 00:23:59.939 ************************************ 00:23:59.939 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:00.200 * Looking for test storage... 00:24:00.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.200 --rc genhtml_branch_coverage=1 00:24:00.200 --rc genhtml_function_coverage=1 00:24:00.200 --rc genhtml_legend=1 00:24:00.200 --rc geninfo_all_blocks=1 00:24:00.200 --rc geninfo_unexecuted_blocks=1 00:24:00.200 00:24:00.200 ' 00:24:00.200 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.200 --rc genhtml_branch_coverage=1 00:24:00.200 --rc genhtml_function_coverage=1 00:24:00.201 --rc genhtml_legend=1 00:24:00.201 --rc geninfo_all_blocks=1 00:24:00.201 --rc geninfo_unexecuted_blocks=1 00:24:00.201 00:24:00.201 ' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:00.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.201 --rc genhtml_branch_coverage=1 00:24:00.201 --rc genhtml_function_coverage=1 00:24:00.201 --rc genhtml_legend=1 00:24:00.201 --rc geninfo_all_blocks=1 00:24:00.201 --rc geninfo_unexecuted_blocks=1 00:24:00.201 00:24:00.201 ' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.201 --rc genhtml_branch_coverage=1 00:24:00.201 --rc genhtml_function_coverage=1 00:24:00.201 --rc genhtml_legend=1 00:24:00.201 --rc geninfo_all_blocks=1 00:24:00.201 --rc geninfo_unexecuted_blocks=1 00:24:00.201 00:24:00.201 ' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.201 11:38:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:00.201 11:38:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d5559e52e777472086ea3c8c44bf3ed7 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.201 11:38:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:08.341 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:08.341 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.341 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:08.342 Found net devices under 0000:31:00.0: cvl_0_0 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:08.342 Found net devices under 0000:31:00.1: cvl_0_1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.342 11:38:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:24:08.342 00:24:08.342 --- 10.0.0.2 ping statistics --- 00:24:08.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.342 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:24:08.342 00:24:08.342 --- 10.0.0.1 ping statistics --- 00:24:08.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.342 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3616722 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3616722 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3616722 ']' 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.342 11:38:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.342 [2024-12-09 11:38:59.855006] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:24:08.342 [2024-12-09 11:38:59.855080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.342 [2024-12-09 11:38:59.940332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.342 [2024-12-09 11:38:59.980784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.342 [2024-12-09 11:38:59.980821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.342 [2024-12-09 11:38:59.980829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.342 [2024-12-09 11:38:59.980836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.342 [2024-12-09 11:38:59.980841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.342 [2024-12-09 11:38:59.981455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.603 [2024-12-09 11:39:00.679340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.603 null0 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d5559e52e777472086ea3c8c44bf3ed7 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.603 [2024-12-09 11:39:00.739613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.603 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.863 nvme0n1 00:24:08.863 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.863 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:08.863 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.863 11:39:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.863 [ 00:24:08.863 { 00:24:08.863 "name": "nvme0n1", 00:24:08.863 "aliases": [ 00:24:08.863 "d5559e52-e777-4720-86ea-3c8c44bf3ed7" 00:24:08.863 ], 00:24:08.863 "product_name": "NVMe disk", 00:24:08.863 "block_size": 512, 00:24:08.863 "num_blocks": 2097152, 00:24:08.863 "uuid": "d5559e52-e777-4720-86ea-3c8c44bf3ed7", 00:24:08.863 "numa_id": 0, 00:24:08.863 "assigned_rate_limits": { 00:24:08.863 "rw_ios_per_sec": 0, 00:24:08.863 "rw_mbytes_per_sec": 0, 00:24:08.863 "r_mbytes_per_sec": 0, 00:24:08.863 "w_mbytes_per_sec": 0 00:24:08.863 }, 00:24:08.863 "claimed": false, 00:24:08.863 "zoned": false, 00:24:08.863 "supported_io_types": { 00:24:08.863 "read": true, 00:24:08.863 "write": true, 00:24:08.863 "unmap": false, 00:24:08.863 "flush": true, 00:24:08.863 "reset": true, 00:24:08.863 "nvme_admin": true, 00:24:08.863 "nvme_io": true, 00:24:08.863 "nvme_io_md": false, 00:24:08.863 "write_zeroes": true, 00:24:08.863 "zcopy": false, 00:24:08.863 "get_zone_info": false, 00:24:08.863 "zone_management": false, 00:24:08.863 "zone_append": false, 00:24:08.863 "compare": true, 00:24:08.863 "compare_and_write": true, 00:24:08.863 "abort": true, 00:24:08.863 "seek_hole": false, 00:24:08.863 "seek_data": false, 00:24:08.863 "copy": true, 00:24:08.863 "nvme_iov_md": false 00:24:08.863 }, 00:24:08.863 
"memory_domains": [ 00:24:08.863 { 00:24:08.863 "dma_device_id": "system", 00:24:08.863 "dma_device_type": 1 00:24:08.863 } 00:24:08.863 ], 00:24:08.863 "driver_specific": { 00:24:08.863 "nvme": [ 00:24:08.863 { 00:24:08.863 "trid": { 00:24:08.863 "trtype": "TCP", 00:24:08.863 "adrfam": "IPv4", 00:24:08.863 "traddr": "10.0.0.2", 00:24:08.863 "trsvcid": "4420", 00:24:08.863 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:08.863 }, 00:24:08.863 "ctrlr_data": { 00:24:08.863 "cntlid": 1, 00:24:08.863 "vendor_id": "0x8086", 00:24:08.863 "model_number": "SPDK bdev Controller", 00:24:08.863 "serial_number": "00000000000000000000", 00:24:08.863 "firmware_revision": "25.01", 00:24:08.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.863 "oacs": { 00:24:08.863 "security": 0, 00:24:08.863 "format": 0, 00:24:08.863 "firmware": 0, 00:24:08.863 "ns_manage": 0 00:24:08.863 }, 00:24:08.863 "multi_ctrlr": true, 00:24:08.863 "ana_reporting": false 00:24:08.863 }, 00:24:08.863 "vs": { 00:24:08.863 "nvme_version": "1.3" 00:24:08.863 }, 00:24:08.863 "ns_data": { 00:24:08.863 "id": 1, 00:24:08.863 "can_share": true 00:24:08.863 } 00:24:08.863 } 00:24:08.863 ], 00:24:08.864 "mp_policy": "active_passive" 00:24:08.864 } 00:24:08.864 } 00:24:08.864 ] 00:24:08.864 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.864 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:08.864 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.864 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.864 [2024-12-09 11:39:01.013898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:08.864 [2024-12-09 11:39:01.013961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a96c40 (9): Bad file descriptor 00:24:09.124 [2024-12-09 11:39:01.146109] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.124 [ 00:24:09.124 { 00:24:09.124 "name": "nvme0n1", 00:24:09.124 "aliases": [ 00:24:09.124 "d5559e52-e777-4720-86ea-3c8c44bf3ed7" 00:24:09.124 ], 00:24:09.124 "product_name": "NVMe disk", 00:24:09.124 "block_size": 512, 00:24:09.124 "num_blocks": 2097152, 00:24:09.124 "uuid": "d5559e52-e777-4720-86ea-3c8c44bf3ed7", 00:24:09.124 "numa_id": 0, 00:24:09.124 "assigned_rate_limits": { 00:24:09.124 "rw_ios_per_sec": 0, 00:24:09.124 "rw_mbytes_per_sec": 0, 00:24:09.124 "r_mbytes_per_sec": 0, 00:24:09.124 "w_mbytes_per_sec": 0 00:24:09.124 }, 00:24:09.124 "claimed": false, 00:24:09.124 "zoned": false, 00:24:09.124 "supported_io_types": { 00:24:09.124 "read": true, 00:24:09.124 "write": true, 00:24:09.124 "unmap": false, 00:24:09.124 "flush": true, 00:24:09.124 "reset": true, 00:24:09.124 "nvme_admin": true, 00:24:09.124 "nvme_io": true, 00:24:09.124 "nvme_io_md": false, 00:24:09.124 "write_zeroes": true, 00:24:09.124 "zcopy": false, 00:24:09.124 "get_zone_info": false, 00:24:09.124 "zone_management": false, 00:24:09.124 "zone_append": false, 00:24:09.124 "compare": true, 00:24:09.124 "compare_and_write": true, 00:24:09.124 "abort": true, 00:24:09.124 "seek_hole": false, 00:24:09.124 "seek_data": false, 00:24:09.124 "copy": true, 00:24:09.124 "nvme_iov_md": false 00:24:09.124 }, 00:24:09.124 "memory_domains": [ 00:24:09.124 { 00:24:09.124 "dma_device_id": "system", 00:24:09.124 "dma_device_type": 1 00:24:09.124 } 00:24:09.124 ], 00:24:09.124 "driver_specific": { 00:24:09.124 "nvme": [ 00:24:09.124 { 00:24:09.124 "trid": { 00:24:09.124 "trtype": "TCP", 00:24:09.124 "adrfam": "IPv4", 00:24:09.124 "traddr": "10.0.0.2", 00:24:09.124 "trsvcid": "4420", 00:24:09.124 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:09.124 }, 00:24:09.124 "ctrlr_data": { 00:24:09.124 "cntlid": 2, 00:24:09.124 "vendor_id": "0x8086", 00:24:09.124 "model_number": "SPDK bdev Controller", 00:24:09.124 "serial_number": "00000000000000000000", 00:24:09.124 "firmware_revision": "25.01", 00:24:09.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:09.124 "oacs": { 00:24:09.124 "security": 0, 00:24:09.124 "format": 0, 00:24:09.124 "firmware": 0, 00:24:09.124 "ns_manage": 0 00:24:09.124 }, 00:24:09.124 "multi_ctrlr": true, 00:24:09.124 "ana_reporting": false 00:24:09.124 }, 00:24:09.124 "vs": { 00:24:09.124 "nvme_version": "1.3" 00:24:09.124 }, 00:24:09.124 "ns_data": { 00:24:09.124 "id": 1, 00:24:09.124 "can_share": true 00:24:09.124 } 00:24:09.124 } 00:24:09.124 ], 00:24:09.124 "mp_policy": "active_passive" 00:24:09.124 } 00:24:09.124 } 00:24:09.124 ] 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
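One detail worth noting before the TLS leg: the nguid threaded through every bdev_get_bdevs dump is just a dash-stripped uuidgen result, generated near the top of async_init.sh. A short sketch of that derivation and the add_ns call it feeds (null0 is the null bdev created earlier in this test):

nguid=$(uuidgen | tr -d -)   # this run got d5559e52e777472086ea3c8c44bf3ed7
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
# The bdev then reports the same value re-hyphenated as its "uuid".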
00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.WttNZPtFUT 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.WttNZPtFUT 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.WttNZPtFUT 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.124 [2024-12-09 11:39:01.234571] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.124 [2024-12-09 11:39:01.234681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.124 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.124 [2024-12-09 11:39:01.258651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.385 nvme0n1 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.385 [ 00:24:09.385 { 00:24:09.385 "name": "nvme0n1", 00:24:09.385 "aliases": [ 00:24:09.385 "d5559e52-e777-4720-86ea-3c8c44bf3ed7" 00:24:09.385 ], 00:24:09.385 "product_name": "NVMe disk", 00:24:09.385 "block_size": 512, 00:24:09.385 "num_blocks": 2097152, 00:24:09.385 "uuid": "d5559e52-e777-4720-86ea-3c8c44bf3ed7", 00:24:09.385 "numa_id": 0, 00:24:09.385 "assigned_rate_limits": { 00:24:09.385 "rw_ios_per_sec": 0, 00:24:09.385 "rw_mbytes_per_sec": 0, 00:24:09.385 "r_mbytes_per_sec": 0, 00:24:09.385 "w_mbytes_per_sec": 0 00:24:09.385 }, 00:24:09.385 "claimed": false, 00:24:09.385 "zoned": false, 00:24:09.385 "supported_io_types": { 00:24:09.385 "read": true, 00:24:09.385 "write": true, 00:24:09.385 "unmap": false, 00:24:09.385 "flush": true, 00:24:09.385 "reset": true, 00:24:09.385 "nvme_admin": true, 00:24:09.385 "nvme_io": true, 00:24:09.385 "nvme_io_md": false, 00:24:09.385 "write_zeroes": true, 00:24:09.385 "zcopy": false, 00:24:09.385 "get_zone_info": false, 00:24:09.385 "zone_management": false, 00:24:09.385 "zone_append": false, 00:24:09.385 "compare": true, 00:24:09.385 "compare_and_write": true, 00:24:09.385 "abort": true, 00:24:09.385 "seek_hole": false, 00:24:09.385 "seek_data": false, 00:24:09.385 "copy": true, 00:24:09.385 "nvme_iov_md": false 00:24:09.385 }, 00:24:09.385 "memory_domains": [ 00:24:09.385 { 00:24:09.385 "dma_device_id": "system", 00:24:09.385 "dma_device_type": 1 00:24:09.385 } 00:24:09.385 ], 00:24:09.385 "driver_specific": { 00:24:09.385 "nvme": [ 00:24:09.385 { 00:24:09.385 "trid": { 00:24:09.385 "trtype": "TCP", 00:24:09.385 "adrfam": "IPv4", 00:24:09.385 "traddr": "10.0.0.2", 00:24:09.385 "trsvcid": "4421", 00:24:09.385 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:09.385 }, 00:24:09.385 "ctrlr_data": { 00:24:09.385 "cntlid": 3, 00:24:09.385 "vendor_id": "0x8086", 00:24:09.385 "model_number": "SPDK bdev Controller", 00:24:09.385 "serial_number": "00000000000000000000", 00:24:09.385 "firmware_revision": "25.01", 00:24:09.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:09.385 "oacs": { 00:24:09.385 "security": 0, 00:24:09.385 "format": 0, 00:24:09.385 "firmware": 0, 00:24:09.385 "ns_manage": 0 00:24:09.385 }, 00:24:09.385 "multi_ctrlr": true, 00:24:09.385 "ana_reporting": false 00:24:09.385 }, 00:24:09.385 "vs": { 00:24:09.385 "nvme_version": "1.3" 00:24:09.385 }, 00:24:09.385 "ns_data": { 00:24:09.385 "id": 1, 00:24:09.385 "can_share": true 00:24:09.385 } 00:24:09.385 } 00:24:09.385 ], 00:24:09.385 "mp_policy": "active_passive" 00:24:09.385 } 00:24:09.385 } 00:24:09.385 ] 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.WttNZPtFUT 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
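For reference, the TLS leg condensed into plain commands. Everything below is verbatim from the xtrace except the redirect into the key file, which is inferred (xtrace does not show redirections), and the temp-file name, which mktemp randomizes per run:

key_path=$(mktemp)   # this run got /tmp/tmp.WttNZPtFUT
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
./scripts/rpc.py keyring_file_add_key key0 "$key_path"
# Lock the subsystem down to a known host, listen on a TLS-only port, then
# attach with the same PSK from the initiator side:
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0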
00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.385 rmmod nvme_tcp 00:24:09.385 rmmod nvme_fabrics 00:24:09.385 rmmod nvme_keyring 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3616722 ']' 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3616722 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3616722 ']' 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3616722 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3616722 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3616722' 00:24:09.385 killing process with pid 3616722 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3616722 00:24:09.385 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3616722 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
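The killprocess trace above is autotest_common.sh checking that pid 3616722 is alive and really is the SPDK reactor before signalling it; reconstructed from the xtrace (so approximate, and the sudo-unwrap branch is elided), it amounts to:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                        # bail if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 here
        fi
        [ "$process_name" = sudo ] && :   # real helper re-resolves the child pid
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                        # SIGTERM, then reap
    }

Around the kill, nvmftestfini also unloads nvme-tcp, nvme-fabrics and nvme-keyring (the rmmod lines just before) and replays iptables-save minus the SPDK_NVMF-tagged rules before tearing down the namespace.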
00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.645 11:39:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.555 11:39:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.815 00:24:11.815 real 0m11.646s 00:24:11.815 user 0m4.199s 00:24:11.815 sys 0m5.971s 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.815 ************************************ 00:24:11.815 END TEST nvmf_async_init 00:24:11.815 ************************************ 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.815 ************************************ 00:24:11.815 START TEST dma 00:24:11.815 ************************************ 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:11.815 * Looking for test storage... 00:24:11.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.815 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.816 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:12.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.077 --rc genhtml_branch_coverage=1 00:24:12.077 --rc genhtml_function_coverage=1 00:24:12.077 --rc genhtml_legend=1 00:24:12.077 --rc geninfo_all_blocks=1 00:24:12.077 --rc geninfo_unexecuted_blocks=1 00:24:12.077 00:24:12.077 ' 00:24:12.077 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:12.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.078 --rc genhtml_branch_coverage=1 00:24:12.078 --rc genhtml_function_coverage=1 00:24:12.078 --rc genhtml_legend=1 00:24:12.078 --rc geninfo_all_blocks=1 00:24:12.078 --rc geninfo_unexecuted_blocks=1 00:24:12.078 00:24:12.078 ' 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:12.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.078 --rc genhtml_branch_coverage=1 00:24:12.078 --rc genhtml_function_coverage=1 00:24:12.078 --rc genhtml_legend=1 00:24:12.078 --rc geninfo_all_blocks=1 00:24:12.078 --rc geninfo_unexecuted_blocks=1 00:24:12.078 00:24:12.078 ' 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:12.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.078 --rc genhtml_branch_coverage=1 00:24:12.078 --rc genhtml_function_coverage=1 00:24:12.078 --rc genhtml_legend=1 00:24:12.078 --rc geninfo_all_blocks=1 00:24:12.078 --rc geninfo_unexecuted_blocks=1 00:24:12.078 00:24:12.078 ' 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.078 
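The scripts/common.sh run above is the coverage guard 'lt 1.15 2' choosing lcov options: both versions are split on '.', '-' and ':' and compared component-wise as integers. A reconstruction from the trace (the decimal validation step is elided), close to but not guaranteed identical to the real helper:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver2 op=$2 v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # equal versions satisfy only <=, >=, ==
    }
    lt 1.15 2 && echo pre-2.0-lcov   # 1 < 2 on the first component

Because this lcov predates 2.0, the branch/function coverage flags get exported through LCOV_OPTS, which is the run of --rc export lines just above.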
11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.078 11:39:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:12.078 00:24:12.078 real 0m0.229s 00:24:12.078 user 0m0.141s 00:24:12.078 sys 0m0.101s 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:12.078 ************************************ 00:24:12.078 END TEST dma 00:24:12.078 ************************************ 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.078 ************************************ 00:24:12.078 START TEST nvmf_identify 00:24:12.078 
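One harness wart worth flagging before the next test starts: the repeated "[: : integer expression expected" complaint is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' because the variable it tests expands to empty (the same error fires again each time common.sh is re-sourced below). A hedged fix, with SOME_FLAG as a stand-in for whatever variable line 33 actually reads (hypothetical name):

    # before: [ "$SOME_FLAG" -eq 1 ]     # '' is not an integer -> noisy error
    [ "${SOME_FLAG:-0}" -eq 1 ]          # default to 0 so the test stays numeric

build_nvmf_app_args tolerates the failure and the tests proceed, so this is cosmetic, but it pollutes every log that sources the file.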
************************************ 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:12.078 * Looking for test storage... 00:24:12.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:24:12.078 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:12.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.340 --rc genhtml_branch_coverage=1 00:24:12.340 --rc genhtml_function_coverage=1 00:24:12.340 --rc genhtml_legend=1 00:24:12.340 --rc geninfo_all_blocks=1 00:24:12.340 --rc geninfo_unexecuted_blocks=1 00:24:12.340 00:24:12.340 ' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:12.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.340 --rc genhtml_branch_coverage=1 00:24:12.340 --rc genhtml_function_coverage=1 00:24:12.340 --rc genhtml_legend=1 00:24:12.340 --rc geninfo_all_blocks=1 00:24:12.340 --rc geninfo_unexecuted_blocks=1 00:24:12.340 00:24:12.340 ' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:12.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.340 --rc genhtml_branch_coverage=1 00:24:12.340 --rc genhtml_function_coverage=1 00:24:12.340 --rc genhtml_legend=1 00:24:12.340 --rc geninfo_all_blocks=1 00:24:12.340 --rc geninfo_unexecuted_blocks=1 00:24:12.340 00:24:12.340 ' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:12.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.340 --rc genhtml_branch_coverage=1 00:24:12.340 --rc genhtml_function_coverage=1 00:24:12.340 --rc genhtml_legend=1 00:24:12.340 --rc geninfo_all_blocks=1 00:24:12.340 --rc geninfo_unexecuted_blocks=1 00:24:12.340 00:24:12.340 ' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.340 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.341 11:39:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.477 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:20.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:20.478 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
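Vendor:device 0x8086:0x159b is the Intel E810 that SPDK_TEST_NVMF_NICS=e810 asked for, so both port functions pass the e810 filter above. A hedged manual cross-check (assumes pciutils; the sysfs glob is the same one common.sh@411 uses to map a PCI function to its netdev):

    lspci -D -d 8086:159b                        # -> 0000:31:00.0 / 0000:31:00.1
    ls /sys/bus/pci/devices/0000:31:00.0/net/    # -> cvl_0_0, the bound netdev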
00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:20.478 Found net devices under 0000:31:00.0: cvl_0_0 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:20.478 Found net devices under 0000:31:00.1: cvl_0_1 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.478 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:24:20.479 00:24:20.479 --- 10.0.0.2 ping statistics --- 00:24:20.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.479 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:24:20.479 11:39:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:24:20.479 00:24:20.479 --- 10.0.0.1 ping statistics --- 00:24:20.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.479 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3622071 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3622071 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3622071 ']' 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.479 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.479 [2024-12-09 11:39:12.115292] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
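With the namespaces verified, identify.sh launches the target inside the netns and records its pid; waitforlisten then blocks until the RPC socket responds. A sketch with the wait loop simplified (the real autotest_common.sh helper does more bookkeeping; rpc_get_methods is just a cheap RPC to poll with):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: cores 0-3
    nvmfpid=$!
    trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1                                      # waitforlisten, simplified
    done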
00:24:20.479 [2024-12-09 11:39:12.115348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.479 [2024-12-09 11:39:12.196659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.479 [2024-12-09 11:39:12.235266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.479 [2024-12-09 11:39:12.235306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.479 [2024-12-09 11:39:12.235314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.479 [2024-12-09 11:39:12.235320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.479 [2024-12-09 11:39:12.235326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.479 [2024-12-09 11:39:12.236884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.479 [2024-12-09 11:39:12.237020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.479 [2024-12-09 11:39:12.237188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.479 [2024-12-09 11:39:12.237295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 [2024-12-09 11:39:12.920502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.050 11:39:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 Malloc0 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 [2024-12-09 11:39:13.034423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.050 [ 00:24:21.050 { 00:24:21.050 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:21.050 "subtype": "Discovery", 00:24:21.050 "listen_addresses": [ 00:24:21.050 { 00:24:21.050 "trtype": "TCP", 00:24:21.050 "adrfam": "IPv4", 00:24:21.050 "traddr": "10.0.0.2", 00:24:21.050 "trsvcid": "4420" 00:24:21.050 } 00:24:21.050 ], 00:24:21.050 "allow_any_host": true, 00:24:21.050 "hosts": [] 00:24:21.050 }, 00:24:21.050 { 00:24:21.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.050 "subtype": "NVMe", 00:24:21.050 "listen_addresses": [ 00:24:21.050 { 00:24:21.050 "trtype": "TCP", 00:24:21.050 "adrfam": "IPv4", 00:24:21.050 "traddr": "10.0.0.2", 00:24:21.050 "trsvcid": "4420" 00:24:21.050 } 00:24:21.050 ], 00:24:21.050 "allow_any_host": true, 00:24:21.050 "hosts": [], 00:24:21.050 "serial_number": "SPDK00000000000001", 00:24:21.050 "model_number": "SPDK bdev Controller", 00:24:21.050 "max_namespaces": 32, 00:24:21.050 "min_cntlid": 1, 00:24:21.050 "max_cntlid": 65519, 00:24:21.050 "namespaces": [ 00:24:21.050 { 00:24:21.050 "nsid": 1, 00:24:21.050 "bdev_name": "Malloc0", 00:24:21.050 "name": "Malloc0", 00:24:21.050 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:21.050 "eui64": "ABCDEF0123456789", 00:24:21.050 "uuid": "9bdc468c-abe1-434c-9d11-00a64199a24e" 00:24:21.050 } 00:24:21.050 ] 00:24:21.050 } 00:24:21.050 ] 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.050 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:21.050 [2024-12-09 11:39:13.096123] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:24:21.050 [2024-12-09 11:39:13.096182] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622120 ] 00:24:21.050 [2024-12-09 11:39:13.149187] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:21.050 [2024-12-09 11:39:13.149239] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:21.050 [2024-12-09 11:39:13.149245] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:21.050 [2024-12-09 11:39:13.149261] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:21.050 [2024-12-09 11:39:13.149270] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:21.050 [2024-12-09 11:39:13.153313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:21.050 [2024-12-09 11:39:13.153347] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcee550 0 00:24:21.050 [2024-12-09 11:39:13.161019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:21.050 [2024-12-09 11:39:13.161033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:21.050 [2024-12-09 11:39:13.161041] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:21.050 [2024-12-09 11:39:13.161044] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:21.050 [2024-12-09 11:39:13.161078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.050 [2024-12-09 11:39:13.161084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.050 [2024-12-09 11:39:13.161089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.161103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:21.051 [2024-12-09 11:39:13.161120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.051 [2024-12-09 11:39:13.169022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.051 [2024-12-09 11:39:13.169032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.051 [2024-12-09 11:39:13.169036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.051 [2024-12-09 11:39:13.169057] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:21.051 [2024-12-09 11:39:13.169064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:21.051 [2024-12-09 11:39:13.169070] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:21.051 [2024-12-09 11:39:13.169086] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.169102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-12-09 11:39:13.169115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.051 [2024-12-09 11:39:13.169325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.051 [2024-12-09 11:39:13.169332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.051 [2024-12-09 11:39:13.169336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.051 [2024-12-09 11:39:13.169348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:21.051 [2024-12-09 11:39:13.169355] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:21.051 [2024-12-09 11:39:13.169362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.169376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-12-09 11:39:13.169387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.051 [2024-12-09 11:39:13.169583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.051 [2024-12-09 11:39:13.169590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.051 [2024-12-09 11:39:13.169594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.051 [2024-12-09 11:39:13.169603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:21.051 [2024-12-09 11:39:13.169611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:21.051 [2024-12-09 11:39:13.169618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.169633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-12-09 11:39:13.169643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 
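Everything from the FABRIC CONNECT down is spdk_nvme_identify performing the standard controller bring-up against the discovery subsystem, and the DEBUG density is just the -L all flag from host/identify.sh@39. Re-running the same command and filtering on the state transitions makes the sequence legible as the trace continues below (the grep is only a reading aid, not part of the test):

    ./build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        2>&1 | grep 'setting state to'
    # connect adminq -> read vs -> read cap -> check en -> disable (CSTS.RDY = 0)
    # -> enable (CC.EN = 1) -> wait for CSTS.RDY = 1 -> reset admin queue
    # -> identify controller (the IDENTIFY 06 cdw10:00000001 command below)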
00:24:21.051 [2024-12-09 11:39:13.169858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.051 [2024-12-09 11:39:13.169865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.051 [2024-12-09 11:39:13.169868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.051 [2024-12-09 11:39:13.169881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:21.051 [2024-12-09 11:39:13.169891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.169898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.169905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-12-09 11:39:13.169916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.051 [2024-12-09 11:39:13.170084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.051 [2024-12-09 11:39:13.170091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.051 [2024-12-09 11:39:13.170094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.051 [2024-12-09 11:39:13.170103] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:21.051 [2024-12-09 11:39:13.170108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:21.051 [2024-12-09 11:39:13.170116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:21.051 [2024-12-09 11:39:13.170226] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:21.051 [2024-12-09 11:39:13.170231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:21.051 [2024-12-09 11:39:13.170241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.170255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-12-09 11:39:13.170266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.051 [2024-12-09 11:39:13.170424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.051 [2024-12-09 11:39:13.170431] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.051 [2024-12-09 11:39:13.170435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.051 [2024-12-09 11:39:13.170444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:21.051 [2024-12-09 11:39:13.170453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.170467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-12-09 11:39:13.170477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.051 [2024-12-09 11:39:13.170665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.051 [2024-12-09 11:39:13.170672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.051 [2024-12-09 11:39:13.170675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.051 [2024-12-09 11:39:13.170690] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:21.051 [2024-12-09 11:39:13.170695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:21.051 [2024-12-09 11:39:13.170703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:21.051 [2024-12-09 11:39:13.170710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:21.051 [2024-12-09 11:39:13.170719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.051 [2024-12-09 11:39:13.170730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-12-09 11:39:13.170740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.051 [2024-12-09 11:39:13.170975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.051 [2024-12-09 11:39:13.170982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.051 [2024-12-09 11:39:13.170985] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.170989] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee550): datao=0, datal=4096, cccid=0 00:24:21.051 [2024-12-09 11:39:13.170994] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xd50100) on tqpair(0xcee550): expected_datao=0, payload_size=4096 00:24:21.051 [2024-12-09 11:39:13.170999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.051 [2024-12-09 11:39:13.171020] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.052 [2024-12-09 11:39:13.171025] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.316 [2024-12-09 11:39:13.212212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.316 [2024-12-09 11:39:13.212216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.316 [2024-12-09 11:39:13.212231] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:21.316 [2024-12-09 11:39:13.212237] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:21.316 [2024-12-09 11:39:13.212242] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:21.316 [2024-12-09 11:39:13.212248] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:21.316 [2024-12-09 11:39:13.212252] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:21.316 [2024-12-09 11:39:13.212258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:21.316 [2024-12-09 11:39:13.212267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:21.316 [2024-12-09 11:39:13.212274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.212293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:21.316 [2024-12-09 11:39:13.212305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.316 [2024-12-09 11:39:13.212528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.316 [2024-12-09 11:39:13.212535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.316 [2024-12-09 11:39:13.212539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.316 [2024-12-09 11:39:13.212551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 
11:39:13.212565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.316 [2024-12-09 11:39:13.212572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.212586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.316 [2024-12-09 11:39:13.212592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.212605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.316 [2024-12-09 11:39:13.212612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.212625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.316 [2024-12-09 11:39:13.212630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:21.316 [2024-12-09 11:39:13.212641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:21.316 [2024-12-09 11:39:13.212648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.212659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.316 [2024-12-09 11:39:13.212671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50100, cid 0, qid 0 00:24:21.316 [2024-12-09 11:39:13.212677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50280, cid 1, qid 0 00:24:21.316 [2024-12-09 11:39:13.212682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50400, cid 2, qid 0 00:24:21.316 [2024-12-09 11:39:13.212687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.316 [2024-12-09 11:39:13.212691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:24:21.316 [2024-12-09 11:39:13.212911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.316 [2024-12-09 11:39:13.212918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.316 [2024-12-09 11:39:13.212923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.316 
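At this point the initializer has armed four ASYNC EVENT REQUESTs (cid 0-3) and issued GET FEATURES KEEP ALIVE TIMER (cid 4). On the application side those AERs surface through a registered callback, and both keep-alives and AER completions are reaped by polling the admin queue. A hedged sketch of that hookup, assuming the connected ctrlr and includes from the sketch above (the callback name and the app_running flag are illustrative):

/* Called once per completed ASYNC EVENT REQUEST; SPDK re-arms it. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* e.g. ABORTED - SQ DELETION when the controller is torn down */
		return;
	}
	/* Async Event Information is carried in CDW0 of the completion. */
	printf("AER: cdw0=0x%08x\n", cpl->cdw0);
}

static void
watch_admin_queue(struct spdk_nvme_ctrlr *ctrlr, volatile bool *app_running)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	/* Polling also services the keep-alive interval reported just below. */
	while (*app_running) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}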
[2024-12-09 11:39:13.212928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee550 00:24:21.316 [2024-12-09 11:39:13.212934] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:21.316 [2024-12-09 11:39:13.212939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:21.316 [2024-12-09 11:39:13.212950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.212954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.212960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.316 [2024-12-09 11:39:13.212971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:24:21.316 [2024-12-09 11:39:13.217019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.316 [2024-12-09 11:39:13.217026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.316 [2024-12-09 11:39:13.217030] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217034] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee550): datao=0, datal=4096, cccid=4 00:24:21.316 [2024-12-09 11:39:13.217039] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee550): expected_datao=0, payload_size=4096 00:24:21.316 [2024-12-09 11:39:13.217043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217050] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217054] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.316 [2024-12-09 11:39:13.217066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.316 [2024-12-09 11:39:13.217069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee550 00:24:21.316 [2024-12-09 11:39:13.217086] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:21.316 [2024-12-09 11:39:13.217109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.217120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.316 [2024-12-09 11:39:13.217127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.217141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.316 [2024-12-09 11:39:13.217155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:24:21.316 [2024-12-09 11:39:13.217161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50880, cid 5, qid 0 00:24:21.316 [2024-12-09 11:39:13.217378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.316 [2024-12-09 11:39:13.217385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.316 [2024-12-09 11:39:13.217389] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217392] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee550): datao=0, datal=1024, cccid=4 00:24:21.316 [2024-12-09 11:39:13.217399] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee550): expected_datao=0, payload_size=1024 00:24:21.316 [2024-12-09 11:39:13.217404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217410] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217414] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.316 [2024-12-09 11:39:13.217426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.316 [2024-12-09 11:39:13.217429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.217433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50880) on tqpair=0xcee550 00:24:21.316 [2024-12-09 11:39:13.259188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.316 [2024-12-09 11:39:13.259199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.316 [2024-12-09 11:39:13.259203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.259207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee550 00:24:21.316 [2024-12-09 11:39:13.259218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.316 [2024-12-09 11:39:13.259222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee550) 00:24:21.316 [2024-12-09 11:39:13.259229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.316 [2024-12-09 11:39:13.259245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:24:21.316 [2024-12-09 11:39:13.259497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.317 [2024-12-09 11:39:13.259504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.317 [2024-12-09 11:39:13.259507] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.259511] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee550): datao=0, datal=3072, cccid=4 00:24:21.317 [2024-12-09 11:39:13.259516] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee550): expected_datao=0, payload_size=3072 00:24:21.317 [2024-12-09 11:39:13.259520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
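The GET LOG PAGE (02) commands above and below target log ID 0x70, the discovery log page: the run reads the page contents, then re-reads the first 8 bytes (cdw10:00010070, an 8-byte transfer) to confirm the generation counter did not change mid-read, and the result is the formatted discovery log printed further down. A sketch of the first step with the public API, assuming the connected ctrlr from the earlier sketch (nsid 0 and the caller-supplied buffer are simplifications; a real caller should follow SPDK's buffer-allocation rules, e.g. spdk_dma_zmalloc()):

#include "spdk/nvmf_spec.h" /* struct spdk_nvmf_discovery_log_page */

static void
get_log_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	*(volatile bool *)ctx = true; /* completion flag; error check elided */
}

/* Read the discovery log page header (genctr, recfmt, numrec), as the
 * logged run does before sizing the full record read. */
static int
read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
		      struct spdk_nvmf_discovery_log_page *hdr)
{
	volatile bool done = false;
	int rc;

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      0 /* nsid */, hdr, sizeof(*hdr),
					      0 /* offset */, get_log_cb,
					      (void *)&done);
	if (rc != 0) {
		return rc;
	}
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}

For the target logged here, such a read would return the generation counter of 2 and the two records printed below: the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1.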
00:24:21.317 [2024-12-09 11:39:13.259537] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.259542] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.305019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.317 [2024-12-09 11:39:13.305030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.317 [2024-12-09 11:39:13.305034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.305039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee550 00:24:21.317 [2024-12-09 11:39:13.305049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.305054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcee550) 00:24:21.317 [2024-12-09 11:39:13.305061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.317 [2024-12-09 11:39:13.305077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50700, cid 4, qid 0 00:24:21.317 [2024-12-09 11:39:13.305242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.317 [2024-12-09 11:39:13.305249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.317 [2024-12-09 11:39:13.305253] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.305256] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcee550): datao=0, datal=8, cccid=4 00:24:21.317 [2024-12-09 11:39:13.305261] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd50700) on tqpair(0xcee550): expected_datao=0, payload_size=8 00:24:21.317 [2024-12-09 11:39:13.305269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.305276] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.305280] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.317 [2024-12-09 11:39:13.346212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.317 [2024-12-09 11:39:13.346216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50700) on tqpair=0xcee550 00:24:21.317 ===================================================== 00:24:21.317 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:21.317 ===================================================== 00:24:21.317 Controller Capabilities/Features 00:24:21.317 ================================ 00:24:21.317 Vendor ID: 0000 00:24:21.317 Subsystem Vendor ID: 0000 00:24:21.317 Serial Number: .................... 00:24:21.317 Model Number: ........................................ 
00:24:21.317 Firmware Version: 25.01 00:24:21.317 Recommended Arb Burst: 0 00:24:21.317 IEEE OUI Identifier: 00 00 00 00:24:21.317 Multi-path I/O 00:24:21.317 May have multiple subsystem ports: No 00:24:21.317 May have multiple controllers: No 00:24:21.317 Associated with SR-IOV VF: No 00:24:21.317 Max Data Transfer Size: 131072 00:24:21.317 Max Number of Namespaces: 0 00:24:21.317 Max Number of I/O Queues: 1024 00:24:21.317 NVMe Specification Version (VS): 1.3 00:24:21.317 NVMe Specification Version (Identify): 1.3 00:24:21.317 Maximum Queue Entries: 128 00:24:21.317 Contiguous Queues Required: Yes 00:24:21.317 Arbitration Mechanisms Supported 00:24:21.317 Weighted Round Robin: Not Supported 00:24:21.317 Vendor Specific: Not Supported 00:24:21.317 Reset Timeout: 15000 ms 00:24:21.317 Doorbell Stride: 4 bytes 00:24:21.317 NVM Subsystem Reset: Not Supported 00:24:21.317 Command Sets Supported 00:24:21.317 NVM Command Set: Supported 00:24:21.317 Boot Partition: Not Supported 00:24:21.317 Memory Page Size Minimum: 4096 bytes 00:24:21.317 Memory Page Size Maximum: 4096 bytes 00:24:21.317 Persistent Memory Region: Not Supported 00:24:21.317 Optional Asynchronous Events Supported 00:24:21.317 Namespace Attribute Notices: Not Supported 00:24:21.317 Firmware Activation Notices: Not Supported 00:24:21.317 ANA Change Notices: Not Supported 00:24:21.317 PLE Aggregate Log Change Notices: Not Supported 00:24:21.317 LBA Status Info Alert Notices: Not Supported 00:24:21.317 EGE Aggregate Log Change Notices: Not Supported 00:24:21.317 Normal NVM Subsystem Shutdown event: Not Supported 00:24:21.317 Zone Descriptor Change Notices: Not Supported 00:24:21.317 Discovery Log Change Notices: Supported 00:24:21.317 Controller Attributes 00:24:21.317 128-bit Host Identifier: Not Supported 00:24:21.317 Non-Operational Permissive Mode: Not Supported 00:24:21.317 NVM Sets: Not Supported 00:24:21.317 Read Recovery Levels: Not Supported 00:24:21.317 Endurance Groups: Not Supported 00:24:21.317 Predictable Latency Mode: Not Supported 00:24:21.317 Traffic Based Keep ALive: Not Supported 00:24:21.317 Namespace Granularity: Not Supported 00:24:21.317 SQ Associations: Not Supported 00:24:21.317 UUID List: Not Supported 00:24:21.317 Multi-Domain Subsystem: Not Supported 00:24:21.317 Fixed Capacity Management: Not Supported 00:24:21.317 Variable Capacity Management: Not Supported 00:24:21.317 Delete Endurance Group: Not Supported 00:24:21.317 Delete NVM Set: Not Supported 00:24:21.317 Extended LBA Formats Supported: Not Supported 00:24:21.317 Flexible Data Placement Supported: Not Supported 00:24:21.317 00:24:21.317 Controller Memory Buffer Support 00:24:21.317 ================================ 00:24:21.317 Supported: No 00:24:21.317 00:24:21.317 Persistent Memory Region Support 00:24:21.317 ================================ 00:24:21.317 Supported: No 00:24:21.317 00:24:21.317 Admin Command Set Attributes 00:24:21.317 ============================ 00:24:21.317 Security Send/Receive: Not Supported 00:24:21.317 Format NVM: Not Supported 00:24:21.317 Firmware Activate/Download: Not Supported 00:24:21.317 Namespace Management: Not Supported 00:24:21.317 Device Self-Test: Not Supported 00:24:21.317 Directives: Not Supported 00:24:21.317 NVMe-MI: Not Supported 00:24:21.317 Virtualization Management: Not Supported 00:24:21.317 Doorbell Buffer Config: Not Supported 00:24:21.317 Get LBA Status Capability: Not Supported 00:24:21.317 Command & Feature Lockdown Capability: Not Supported 00:24:21.317 Abort Command Limit: 1 00:24:21.317 Async 
Event Request Limit: 4 00:24:21.317 Number of Firmware Slots: N/A 00:24:21.317 Firmware Slot 1 Read-Only: N/A 00:24:21.317 Firmware Activation Without Reset: N/A 00:24:21.317 Multiple Update Detection Support: N/A 00:24:21.317 Firmware Update Granularity: No Information Provided 00:24:21.317 Per-Namespace SMART Log: No 00:24:21.317 Asymmetric Namespace Access Log Page: Not Supported 00:24:21.317 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:21.317 Command Effects Log Page: Not Supported 00:24:21.317 Get Log Page Extended Data: Supported 00:24:21.317 Telemetry Log Pages: Not Supported 00:24:21.317 Persistent Event Log Pages: Not Supported 00:24:21.317 Supported Log Pages Log Page: May Support 00:24:21.317 Commands Supported & Effects Log Page: Not Supported 00:24:21.317 Feature Identifiers & Effects Log Page:May Support 00:24:21.317 NVMe-MI Commands & Effects Log Page: May Support 00:24:21.317 Data Area 4 for Telemetry Log: Not Supported 00:24:21.317 Error Log Page Entries Supported: 128 00:24:21.317 Keep Alive: Not Supported 00:24:21.317 00:24:21.317 NVM Command Set Attributes 00:24:21.317 ========================== 00:24:21.317 Submission Queue Entry Size 00:24:21.317 Max: 1 00:24:21.317 Min: 1 00:24:21.317 Completion Queue Entry Size 00:24:21.317 Max: 1 00:24:21.317 Min: 1 00:24:21.317 Number of Namespaces: 0 00:24:21.317 Compare Command: Not Supported 00:24:21.317 Write Uncorrectable Command: Not Supported 00:24:21.317 Dataset Management Command: Not Supported 00:24:21.317 Write Zeroes Command: Not Supported 00:24:21.317 Set Features Save Field: Not Supported 00:24:21.317 Reservations: Not Supported 00:24:21.317 Timestamp: Not Supported 00:24:21.317 Copy: Not Supported 00:24:21.317 Volatile Write Cache: Not Present 00:24:21.317 Atomic Write Unit (Normal): 1 00:24:21.317 Atomic Write Unit (PFail): 1 00:24:21.317 Atomic Compare & Write Unit: 1 00:24:21.317 Fused Compare & Write: Supported 00:24:21.317 Scatter-Gather List 00:24:21.317 SGL Command Set: Supported 00:24:21.317 SGL Keyed: Supported 00:24:21.317 SGL Bit Bucket Descriptor: Not Supported 00:24:21.317 SGL Metadata Pointer: Not Supported 00:24:21.317 Oversized SGL: Not Supported 00:24:21.317 SGL Metadata Address: Not Supported 00:24:21.317 SGL Offset: Supported 00:24:21.317 Transport SGL Data Block: Not Supported 00:24:21.317 Replay Protected Memory Block: Not Supported 00:24:21.317 00:24:21.317 Firmware Slot Information 00:24:21.317 ========================= 00:24:21.317 Active slot: 0 00:24:21.317 00:24:21.317 00:24:21.317 Error Log 00:24:21.317 ========= 00:24:21.317 00:24:21.317 Active Namespaces 00:24:21.317 ================= 00:24:21.317 Discovery Log Page 00:24:21.317 ================== 00:24:21.317 Generation Counter: 2 00:24:21.317 Number of Records: 2 00:24:21.317 Record Format: 0 00:24:21.317 00:24:21.317 Discovery Log Entry 0 00:24:21.317 ---------------------- 00:24:21.317 Transport Type: 3 (TCP) 00:24:21.317 Address Family: 1 (IPv4) 00:24:21.317 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:21.317 Entry Flags: 00:24:21.317 Duplicate Returned Information: 1 00:24:21.317 Explicit Persistent Connection Support for Discovery: 1 00:24:21.317 Transport Requirements: 00:24:21.317 Secure Channel: Not Required 00:24:21.317 Port ID: 0 (0x0000) 00:24:21.317 Controller ID: 65535 (0xffff) 00:24:21.317 Admin Max SQ Size: 128 00:24:21.317 Transport Service Identifier: 4420 00:24:21.317 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:21.317 Transport Address: 10.0.0.2 00:24:21.317 
Discovery Log Entry 1 00:24:21.317 ---------------------- 00:24:21.317 Transport Type: 3 (TCP) 00:24:21.317 Address Family: 1 (IPv4) 00:24:21.317 Subsystem Type: 2 (NVM Subsystem) 00:24:21.317 Entry Flags: 00:24:21.317 Duplicate Returned Information: 0 00:24:21.317 Explicit Persistent Connection Support for Discovery: 0 00:24:21.317 Transport Requirements: 00:24:21.317 Secure Channel: Not Required 00:24:21.317 Port ID: 0 (0x0000) 00:24:21.317 Controller ID: 65535 (0xffff) 00:24:21.317 Admin Max SQ Size: 128 00:24:21.317 Transport Service Identifier: 4420 00:24:21.317 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:21.317 Transport Address: 10.0.0.2 [2024-12-09 11:39:13.346309] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:21.317 [2024-12-09 11:39:13.346321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50100) on tqpair=0xcee550 00:24:21.317 [2024-12-09 11:39:13.346328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.317 [2024-12-09 11:39:13.346333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50280) on tqpair=0xcee550 00:24:21.317 [2024-12-09 11:39:13.346338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.317 [2024-12-09 11:39:13.346343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50400) on tqpair=0xcee550 00:24:21.317 [2024-12-09 11:39:13.346348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.317 [2024-12-09 11:39:13.346353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.317 [2024-12-09 11:39:13.346357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.317 [2024-12-09 11:39:13.346368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.317 [2024-12-09 11:39:13.346384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.317 [2024-12-09 11:39:13.346398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.317 [2024-12-09 11:39:13.346470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.317 [2024-12-09 11:39:13.346476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.317 [2024-12-09 11:39:13.346480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.317 [2024-12-09 11:39:13.346491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.317 [2024-12-09 11:39:13.346505] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.317 [2024-12-09 11:39:13.346518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.317 [2024-12-09 11:39:13.346728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.317 [2024-12-09 11:39:13.346735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.317 [2024-12-09 11:39:13.346739] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.317 [2024-12-09 11:39:13.346747] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:21.317 [2024-12-09 11:39:13.346754] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:21.317 [2024-12-09 11:39:13.346764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.317 [2024-12-09 11:39:13.346771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.317 [2024-12-09 11:39:13.346778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.317 [2024-12-09 11:39:13.346789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.317 [2024-12-09 11:39:13.346968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.317 [2024-12-09 11:39:13.346975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.346978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.346982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.318 [2024-12-09 11:39:13.346993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.346997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.318 [2024-12-09 11:39:13.347007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.347023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.318 [2024-12-09 11:39:13.347195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.347202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.347205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.318 [2024-12-09 11:39:13.347219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347226] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.318 [2024-12-09 11:39:13.347233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.347243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.318 [2024-12-09 11:39:13.347460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.347466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.347470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.318 [2024-12-09 11:39:13.347483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.318 [2024-12-09 11:39:13.347497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.347507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.318 [2024-12-09 11:39:13.347680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.347686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.347690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.318 [2024-12-09 11:39:13.347705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.318 [2024-12-09 11:39:13.347720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.347730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.318 [2024-12-09 11:39:13.347933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.347940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.347943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.318 [2024-12-09 11:39:13.347957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.347964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcee550) 00:24:21.318 [2024-12-09 11:39:13.347971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.347981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd50580, cid 3, qid 0 00:24:21.318 [2024-12-09 11:39:13.352018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.352026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.352030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.352034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd50580) on tqpair=0xcee550 00:24:21.318 [2024-12-09 11:39:13.352042] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:24:21.318 00:24:21.318 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:21.318 [2024-12-09 11:39:13.394502] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:24:21.318 [2024-12-09 11:39:13.394544] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622240 ] 00:24:21.318 [2024-12-09 11:39:13.450057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:21.318 [2024-12-09 11:39:13.450102] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:21.318 [2024-12-09 11:39:13.450107] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:21.318 [2024-12-09 11:39:13.450123] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:21.318 [2024-12-09 11:39:13.450130] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:21.318 [2024-12-09 11:39:13.450731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:21.318 [2024-12-09 11:39:13.450758] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x159d550 0 00:24:21.318 [2024-12-09 11:39:13.457021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:21.318 [2024-12-09 11:39:13.457038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:21.318 [2024-12-09 11:39:13.457045] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:21.318 [2024-12-09 11:39:13.457048] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:21.318 [2024-12-09 11:39:13.457075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.457080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.457084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.457095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:21.318 [2024-12-09 11:39:13.457112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.465020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.465029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.465032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.465045] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:21.318 [2024-12-09 11:39:13.465052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:21.318 [2024-12-09 11:39:13.465057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:21.318 [2024-12-09 11:39:13.465070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.465085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.465098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.465279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.465286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.465290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.465300] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:21.318 [2024-12-09 11:39:13.465308] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:21.318 [2024-12-09 11:39:13.465315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.465329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.465339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.465538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.465545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.465548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.465557] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:21.318 [2024-12-09 11:39:13.465568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:21.318 [2024-12-09 11:39:13.465575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.465590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.465600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.465805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.465812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.465815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.465824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:21.318 [2024-12-09 11:39:13.465833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.465840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.465847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.465858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.466073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.466080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.466083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.466092] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:21.318 [2024-12-09 11:39:13.466097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:21.318 [2024-12-09 11:39:13.466104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:21.318 [2024-12-09 11:39:13.466212] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:21.318 [2024-12-09 11:39:13.466217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:21.318 [2024-12-09 11:39:13.466224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.466238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.466249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.466404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.466411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.466414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.466425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:21.318 [2024-12-09 11:39:13.466434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.466448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.466459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.466641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.466647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.466651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.466659] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:21.318 [2024-12-09 11:39:13.466664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:21.318 [2024-12-09 11:39:13.466671] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:21.318 [2024-12-09 11:39:13.466684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:21.318 [2024-12-09 11:39:13.466692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.318 [2024-12-09 11:39:13.466703] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.318 [2024-12-09 11:39:13.466713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.318 [2024-12-09 11:39:13.466904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.318 [2024-12-09 11:39:13.466911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.318 [2024-12-09 11:39:13.466914] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466918] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=4096, cccid=0 00:24:21.318 [2024-12-09 11:39:13.466923] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ff100) on tqpair(0x159d550): expected_datao=0, payload_size=4096 00:24:21.318 [2024-12-09 11:39:13.466927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466935] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.466939] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.467107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.318 [2024-12-09 11:39:13.467114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.318 [2024-12-09 11:39:13.467117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.318 [2024-12-09 11:39:13.467121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.318 [2024-12-09 11:39:13.467131] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:21.318 [2024-12-09 11:39:13.467136] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:21.318 [2024-12-09 11:39:13.467142] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:21.319 [2024-12-09 11:39:13.467146] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:21.319 [2024-12-09 11:39:13.467151] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:21.319 [2024-12-09 11:39:13.467156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.467164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.467171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.467186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:21.319 [2024-12-09 11:39:13.467196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.319 [2024-12-09 11:39:13.467379] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.319 [2024-12-09 11:39:13.467385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.319 [2024-12-09 11:39:13.467389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.319 [2024-12-09 11:39:13.467399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.467412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.319 [2024-12-09 11:39:13.467419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.467432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.319 [2024-12-09 11:39:13.467438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.467451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.319 [2024-12-09 11:39:13.467457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.467470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.319 [2024-12-09 11:39:13.467474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.467485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.467491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.467503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.319 [2024-12-09 11:39:13.467516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff100, cid 0, qid 0 00:24:21.319 [2024-12-09 11:39:13.467521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x15ff280, cid 1, qid 0 00:24:21.319 [2024-12-09 11:39:13.467526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff400, cid 2, qid 0 00:24:21.319 [2024-12-09 11:39:13.467531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff580, cid 3, qid 0 00:24:21.319 [2024-12-09 11:39:13.467536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff700, cid 4, qid 0 00:24:21.319 [2024-12-09 11:39:13.467722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.319 [2024-12-09 11:39:13.467728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.319 [2024-12-09 11:39:13.467731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff700) on tqpair=0x159d550 00:24:21.319 [2024-12-09 11:39:13.467740] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:21.319 [2024-12-09 11:39:13.467745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.467753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.467759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.467766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.467780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:21.319 [2024-12-09 11:39:13.467790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff700, cid 4, qid 0 00:24:21.319 [2024-12-09 11:39:13.467962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.319 [2024-12-09 11:39:13.467968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.319 [2024-12-09 11:39:13.467972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.467976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff700) on tqpair=0x159d550 00:24:21.319 [2024-12-09 11:39:13.468043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.468052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.468060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.468070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.319 [2024-12-09 11:39:13.468081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff700, cid 4, qid 0 00:24:21.319 [2024-12-09 11:39:13.468268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.319 [2024-12-09 11:39:13.468275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.319 [2024-12-09 11:39:13.468280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=4096, cccid=4 00:24:21.319 [2024-12-09 11:39:13.468288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ff700) on tqpair(0x159d550): expected_datao=0, payload_size=4096 00:24:21.319 [2024-12-09 11:39:13.468293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468314] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468317] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.319 [2024-12-09 11:39:13.468508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.319 [2024-12-09 11:39:13.468511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff700) on tqpair=0x159d550 00:24:21.319 [2024-12-09 11:39:13.468524] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:21.319 [2024-12-09 11:39:13.468534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.468543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.468550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.468560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.319 [2024-12-09 11:39:13.468571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff700, cid 4, qid 0 00:24:21.319 [2024-12-09 11:39:13.468790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.319 [2024-12-09 11:39:13.468796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.319 [2024-12-09 11:39:13.468800] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468803] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=4096, cccid=4 00:24:21.319 [2024-12-09 11:39:13.468808] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ff700) on tqpair(0x159d550): expected_datao=0, payload_size=4096 00:24:21.319 [2024-12-09 11:39:13.468812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468819] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.319 
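The c2h_data entries above show how the host validates each Controller-to-Host data PDU before accepting its payload: the capsule ID (cccid), data offset (datao), and data length (datal) carried in the header must line up with the outstanding request's command ID, expected_datao, and payload_size. A minimal self-contained C sketch of that check, using simplified stand-in types named after the fields in the trace rather than SPDK's actual structures:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the fields visible in the trace. */
struct c2h_data_hdr {
    uint16_t cccid;  /* capsule command id the payload belongs to */
    uint32_t datao;  /* byte offset of this chunk within the transfer */
    uint32_t datal;  /* byte length of this chunk */
};

struct tcp_req {
    uint16_t cid;             /* command id assigned at submit time */
    uint32_t expected_datao;  /* next offset expected (in-order delivery) */
    uint32_t payload_size;    /* total transfer size, e.g. 4096 above */
};

/* Accept the chunk only if it targets the right command, arrives in
 * order, and stays within the advertised payload size. */
static bool c2h_data_hdr_valid(const struct c2h_data_hdr *hdr,
                               const struct tcp_req *req)
{
    return hdr->cccid == req->cid &&
           hdr->datao == req->expected_datao &&
           hdr->datao + hdr->datal <= req->payload_size;
}

int main(void)
{
    /* Values mirror the trace: datao=0, datal=4096, cccid=4, payload_size=4096. */
    struct c2h_data_hdr hdr = { .cccid = 4, .datao = 0, .datal = 4096 };
    struct tcp_req req = { .cid = 4, .expected_datao = 0, .payload_size = 4096 };
    printf("%s\n", c2h_data_hdr_valid(&hdr, &req) ? "accept" : "reject");
    return 0;
}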
[2024-12-09 11:39:13.468822] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.468997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.319 [2024-12-09 11:39:13.469003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.319 [2024-12-09 11:39:13.469007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.473016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff700) on tqpair=0x159d550 00:24:21.319 [2024-12-09 11:39:13.473029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.473039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:21.319 [2024-12-09 11:39:13.473047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.319 [2024-12-09 11:39:13.473051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159d550) 00:24:21.319 [2024-12-09 11:39:13.473057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.319 [2024-12-09 11:39:13.473071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff700, cid 4, qid 0 00:24:21.582 [2024-12-09 11:39:13.473239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.582 [2024-12-09 11:39:13.473246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.582 [2024-12-09 11:39:13.473251] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.582 [2024-12-09 11:39:13.473255] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=4096, cccid=4 00:24:21.582 [2024-12-09 11:39:13.473260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ff700) on tqpair(0x159d550): expected_datao=0, payload_size=4096 00:24:21.583 [2024-12-09 11:39:13.473266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473288] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.583 [2024-12-09 11:39:13.473491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.583 [2024-12-09 11:39:13.473494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff700) on tqpair=0x159d550 00:24:21.583 [2024-12-09 11:39:13.473505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:21.583 [2024-12-09 11:39:13.473513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:21.583 [2024-12-09 11:39:13.473521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:21.583 [2024-12-09 11:39:13.473529] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:21.583 [2024-12-09 11:39:13.473534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:21.583 [2024-12-09 11:39:13.473540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:21.583 [2024-12-09 11:39:13.473545] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:21.583 [2024-12-09 11:39:13.473550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:21.583 [2024-12-09 11:39:13.473555] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:21.583 [2024-12-09 11:39:13.473568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.473579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.473585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.473599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.583 [2024-12-09 11:39:13.473612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff700, cid 4, qid 0 00:24:21.583 [2024-12-09 11:39:13.473617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff880, cid 5, qid 0 00:24:21.583 [2024-12-09 11:39:13.473807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.583 [2024-12-09 11:39:13.473817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.583 [2024-12-09 11:39:13.473821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff700) on tqpair=0x159d550 00:24:21.583 [2024-12-09 11:39:13.473832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.583 [2024-12-09 11:39:13.473837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.583 [2024-12-09 11:39:13.473841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff880) on tqpair=0x159d550 00:24:21.583 [2024-12-09 11:39:13.473854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.473858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.473865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 
cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.473875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff880, cid 5, qid 0 00:24:21.583 [2024-12-09 11:39:13.474080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.583 [2024-12-09 11:39:13.474087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.583 [2024-12-09 11:39:13.474090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff880) on tqpair=0x159d550 00:24:21.583 [2024-12-09 11:39:13.474103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.474113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.474124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff880, cid 5, qid 0 00:24:21.583 [2024-12-09 11:39:13.474298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.583 [2024-12-09 11:39:13.474304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.583 [2024-12-09 11:39:13.474308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff880) on tqpair=0x159d550 00:24:21.583 [2024-12-09 11:39:13.474320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.474331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.474340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff880, cid 5, qid 0 00:24:21.583 [2024-12-09 11:39:13.474566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.583 [2024-12-09 11:39:13.474572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.583 [2024-12-09 11:39:13.474575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff880) on tqpair=0x159d550 00:24:21.583 [2024-12-09 11:39:13.474595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.474606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.474613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.474625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.474632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.474642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.474649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x159d550) 00:24:21.583 [2024-12-09 11:39:13.474659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.583 [2024-12-09 11:39:13.474670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff880, cid 5, qid 0 00:24:21.583 [2024-12-09 11:39:13.474676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff700, cid 4, qid 0 00:24:21.583 [2024-12-09 11:39:13.474680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ffa00, cid 6, qid 0 00:24:21.583 [2024-12-09 11:39:13.474685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ffb80, cid 7, qid 0 00:24:21.583 [2024-12-09 11:39:13.474922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.583 [2024-12-09 11:39:13.474929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.583 [2024-12-09 11:39:13.474932] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.474936] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=8192, cccid=5 00:24:21.583 [2024-12-09 11:39:13.474940] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ff880) on tqpair(0x159d550): expected_datao=0, payload_size=8192 00:24:21.583 [2024-12-09 11:39:13.474944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.475056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.475061] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.583 [2024-12-09 11:39:13.475067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.583 [2024-12-09 11:39:13.475073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.583 [2024-12-09 11:39:13.475076] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475080] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=512, cccid=4 00:24:21.584 [2024-12-09 11:39:13.475084] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ff700) on tqpair(0x159d550): expected_datao=0, payload_size=512 00:24:21.584 [2024-12-09 11:39:13.475088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475095] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475098] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:24:21.584 [2024-12-09 11:39:13.475110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.584 [2024-12-09 11:39:13.475113] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475116] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=512, cccid=6 00:24:21.584 [2024-12-09 11:39:13.475121] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ffa00) on tqpair(0x159d550): expected_datao=0, payload_size=512 00:24:21.584 [2024-12-09 11:39:13.475125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475131] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475135] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.584 [2024-12-09 11:39:13.475148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.584 [2024-12-09 11:39:13.475152] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475155] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x159d550): datao=0, datal=4096, cccid=7 00:24:21.584 [2024-12-09 11:39:13.475160] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ffb80) on tqpair(0x159d550): expected_datao=0, payload_size=4096 00:24:21.584 [2024-12-09 11:39:13.475164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475171] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475174] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.584 [2024-12-09 11:39:13.475187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.584 [2024-12-09 11:39:13.475191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff880) on tqpair=0x159d550 00:24:21.584 [2024-12-09 11:39:13.475206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.584 [2024-12-09 11:39:13.475212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.584 [2024-12-09 11:39:13.475215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff700) on tqpair=0x159d550 00:24:21.584 [2024-12-09 11:39:13.475229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.584 [2024-12-09 11:39:13.475235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.584 [2024-12-09 11:39:13.475238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.584 [2024-12-09 11:39:13.475242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ffa00) on tqpair=0x159d550 00:24:21.584 [2024-12-09 11:39:13.475249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.584 [2024-12-09 11:39:13.475255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.584 [2024-12-09 11:39:13.475259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
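Up to this point the nvme_ctrlr.c state transitions have traced the standard fabrics bring-up: enable the controller via CC.EN = 1, poll until CSTS.RDY = 1 (with the 15000 ms budget shown), then walk through identify controller, configure AER, set the keep-alive timer, size the queues, and identify the namespaces before the controller is declared ready; the capability summary that follows is the output of those identify commands. A minimal self-contained sketch of the enable/ready-wait step is below; the NVME_REG_* offsets follow the NVMe spec, while prop_get/prop_set and the simulated register store are illustrative stand-ins for the FABRIC PROPERTY GET/SET capsules in the log, not SPDK's API:

#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC    0x14  /* Controller Configuration */
#define NVME_REG_CSTS  0x1c  /* Controller Status */
#define NVME_CC_EN     0x1
#define NVME_CSTS_RDY  0x1

/* Simulated property store so the sketch runs standalone; a fabrics
 * host would issue PROPERTY GET/SET capsules like those in the trace. */
static uint64_t regs[0x40];

static int prop_set(uint32_t ofs, uint64_t val)
{
    regs[ofs / 4] = val;
    /* Toy model: the controller reports ready as soon as it is enabled. */
    if (ofs == NVME_REG_CC && (val & NVME_CC_EN))
        regs[NVME_REG_CSTS / 4] |= NVME_CSTS_RDY;
    return 0;
}

static int prop_get(uint32_t ofs, uint64_t *val)
{
    *val = regs[ofs / 4];
    return 0;
}

/* Enable the controller, then poll CSTS.RDY until it flips to 1 or the
 * budget runs out - the "wait for CSTS.RDY = 1 (timeout 15000 ms)" state. */
static int ctrlr_enable_and_wait_ready(unsigned timeout_ms)
{
    uint64_t cc, csts;
    unsigned waited_ms = 0;

    if (prop_get(NVME_REG_CC, &cc) || prop_set(NVME_REG_CC, cc | NVME_CC_EN))
        return -1;

    while (waited_ms <= timeout_ms) {
        if (prop_get(NVME_REG_CSTS, &csts))
            return -1;
        if (csts & NVME_CSTS_RDY)
            return 0;          /* CC.EN = 1 && CSTS.RDY = 1: controller ready */
        waited_ms += 10;       /* a real host would sleep between polls */
    }
    return -1;                 /* timed out */
}

int main(void)
{
    printf("controller %s\n",
           ctrlr_enable_and_wait_ready(15000) == 0 ? "ready" : "timed out");
    return 0;
}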
00:24:21.584 [2024-12-09 11:39:13.475262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ffb80) on tqpair=0x159d550 00:24:21.584 ===================================================== 00:24:21.584 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.584 ===================================================== 00:24:21.584 Controller Capabilities/Features 00:24:21.584 ================================ 00:24:21.584 Vendor ID: 8086 00:24:21.584 Subsystem Vendor ID: 8086 00:24:21.584 Serial Number: SPDK00000000000001 00:24:21.584 Model Number: SPDK bdev Controller 00:24:21.584 Firmware Version: 25.01 00:24:21.584 Recommended Arb Burst: 6 00:24:21.584 IEEE OUI Identifier: e4 d2 5c 00:24:21.584 Multi-path I/O 00:24:21.584 May have multiple subsystem ports: Yes 00:24:21.584 May have multiple controllers: Yes 00:24:21.584 Associated with SR-IOV VF: No 00:24:21.584 Max Data Transfer Size: 131072 00:24:21.584 Max Number of Namespaces: 32 00:24:21.584 Max Number of I/O Queues: 127 00:24:21.584 NVMe Specification Version (VS): 1.3 00:24:21.584 NVMe Specification Version (Identify): 1.3 00:24:21.584 Maximum Queue Entries: 128 00:24:21.584 Contiguous Queues Required: Yes 00:24:21.584 Arbitration Mechanisms Supported 00:24:21.584 Weighted Round Robin: Not Supported 00:24:21.584 Vendor Specific: Not Supported 00:24:21.584 Reset Timeout: 15000 ms 00:24:21.584 Doorbell Stride: 4 bytes 00:24:21.584 NVM Subsystem Reset: Not Supported 00:24:21.584 Command Sets Supported 00:24:21.584 NVM Command Set: Supported 00:24:21.584 Boot Partition: Not Supported 00:24:21.584 Memory Page Size Minimum: 4096 bytes 00:24:21.584 Memory Page Size Maximum: 4096 bytes 00:24:21.584 Persistent Memory Region: Not Supported 00:24:21.584 Optional Asynchronous Events Supported 00:24:21.584 Namespace Attribute Notices: Supported 00:24:21.584 Firmware Activation Notices: Not Supported 00:24:21.584 ANA Change Notices: Not Supported 00:24:21.584 PLE Aggregate Log Change Notices: Not Supported 00:24:21.584 LBA Status Info Alert Notices: Not Supported 00:24:21.584 EGE Aggregate Log Change Notices: Not Supported 00:24:21.584 Normal NVM Subsystem Shutdown event: Not Supported 00:24:21.584 Zone Descriptor Change Notices: Not Supported 00:24:21.584 Discovery Log Change Notices: Not Supported 00:24:21.584 Controller Attributes 00:24:21.584 128-bit Host Identifier: Supported 00:24:21.584 Non-Operational Permissive Mode: Not Supported 00:24:21.584 NVM Sets: Not Supported 00:24:21.584 Read Recovery Levels: Not Supported 00:24:21.584 Endurance Groups: Not Supported 00:24:21.584 Predictable Latency Mode: Not Supported 00:24:21.584 Traffic Based Keep ALive: Not Supported 00:24:21.584 Namespace Granularity: Not Supported 00:24:21.584 SQ Associations: Not Supported 00:24:21.584 UUID List: Not Supported 00:24:21.584 Multi-Domain Subsystem: Not Supported 00:24:21.584 Fixed Capacity Management: Not Supported 00:24:21.584 Variable Capacity Management: Not Supported 00:24:21.584 Delete Endurance Group: Not Supported 00:24:21.584 Delete NVM Set: Not Supported 00:24:21.584 Extended LBA Formats Supported: Not Supported 00:24:21.584 Flexible Data Placement Supported: Not Supported 00:24:21.584 00:24:21.584 Controller Memory Buffer Support 00:24:21.584 ================================ 00:24:21.584 Supported: No 00:24:21.584 00:24:21.584 Persistent Memory Region Support 00:24:21.584 ================================ 00:24:21.584 Supported: No 00:24:21.584 00:24:21.584 Admin Command Set Attributes 00:24:21.584 
============================ 00:24:21.584 Security Send/Receive: Not Supported 00:24:21.584 Format NVM: Not Supported 00:24:21.584 Firmware Activate/Download: Not Supported 00:24:21.584 Namespace Management: Not Supported 00:24:21.584 Device Self-Test: Not Supported 00:24:21.584 Directives: Not Supported 00:24:21.584 NVMe-MI: Not Supported 00:24:21.584 Virtualization Management: Not Supported 00:24:21.584 Doorbell Buffer Config: Not Supported 00:24:21.584 Get LBA Status Capability: Not Supported 00:24:21.584 Command & Feature Lockdown Capability: Not Supported 00:24:21.584 Abort Command Limit: 4 00:24:21.584 Async Event Request Limit: 4 00:24:21.584 Number of Firmware Slots: N/A 00:24:21.584 Firmware Slot 1 Read-Only: N/A 00:24:21.584 Firmware Activation Without Reset: N/A 00:24:21.584 Multiple Update Detection Support: N/A 00:24:21.584 Firmware Update Granularity: No Information Provided 00:24:21.584 Per-Namespace SMART Log: No 00:24:21.584 Asymmetric Namespace Access Log Page: Not Supported 00:24:21.584 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:21.584 Command Effects Log Page: Supported 00:24:21.584 Get Log Page Extended Data: Supported 00:24:21.584 Telemetry Log Pages: Not Supported 00:24:21.584 Persistent Event Log Pages: Not Supported 00:24:21.584 Supported Log Pages Log Page: May Support 00:24:21.584 Commands Supported & Effects Log Page: Not Supported 00:24:21.584 Feature Identifiers & Effects Log Page:May Support 00:24:21.584 NVMe-MI Commands & Effects Log Page: May Support 00:24:21.585 Data Area 4 for Telemetry Log: Not Supported 00:24:21.585 Error Log Page Entries Supported: 128 00:24:21.585 Keep Alive: Supported 00:24:21.585 Keep Alive Granularity: 10000 ms 00:24:21.585 00:24:21.585 NVM Command Set Attributes 00:24:21.585 ========================== 00:24:21.585 Submission Queue Entry Size 00:24:21.585 Max: 64 00:24:21.585 Min: 64 00:24:21.585 Completion Queue Entry Size 00:24:21.585 Max: 16 00:24:21.585 Min: 16 00:24:21.585 Number of Namespaces: 32 00:24:21.585 Compare Command: Supported 00:24:21.585 Write Uncorrectable Command: Not Supported 00:24:21.585 Dataset Management Command: Supported 00:24:21.585 Write Zeroes Command: Supported 00:24:21.585 Set Features Save Field: Not Supported 00:24:21.585 Reservations: Supported 00:24:21.585 Timestamp: Not Supported 00:24:21.585 Copy: Supported 00:24:21.585 Volatile Write Cache: Present 00:24:21.585 Atomic Write Unit (Normal): 1 00:24:21.585 Atomic Write Unit (PFail): 1 00:24:21.585 Atomic Compare & Write Unit: 1 00:24:21.585 Fused Compare & Write: Supported 00:24:21.585 Scatter-Gather List 00:24:21.585 SGL Command Set: Supported 00:24:21.585 SGL Keyed: Supported 00:24:21.585 SGL Bit Bucket Descriptor: Not Supported 00:24:21.585 SGL Metadata Pointer: Not Supported 00:24:21.585 Oversized SGL: Not Supported 00:24:21.585 SGL Metadata Address: Not Supported 00:24:21.585 SGL Offset: Supported 00:24:21.585 Transport SGL Data Block: Not Supported 00:24:21.585 Replay Protected Memory Block: Not Supported 00:24:21.585 00:24:21.585 Firmware Slot Information 00:24:21.585 ========================= 00:24:21.585 Active slot: 1 00:24:21.585 Slot 1 Firmware Revision: 25.01 00:24:21.585 00:24:21.585 00:24:21.585 Commands Supported and Effects 00:24:21.585 ============================== 00:24:21.585 Admin Commands 00:24:21.585 -------------- 00:24:21.585 Get Log Page (02h): Supported 00:24:21.585 Identify (06h): Supported 00:24:21.585 Abort (08h): Supported 00:24:21.585 Set Features (09h): Supported 00:24:21.585 Get Features (0Ah): Supported 
00:24:21.585 Asynchronous Event Request (0Ch): Supported 00:24:21.585 Keep Alive (18h): Supported 00:24:21.585 I/O Commands 00:24:21.585 ------------ 00:24:21.585 Flush (00h): Supported LBA-Change 00:24:21.585 Write (01h): Supported LBA-Change 00:24:21.585 Read (02h): Supported 00:24:21.585 Compare (05h): Supported 00:24:21.585 Write Zeroes (08h): Supported LBA-Change 00:24:21.585 Dataset Management (09h): Supported LBA-Change 00:24:21.585 Copy (19h): Supported LBA-Change 00:24:21.585 00:24:21.585 Error Log 00:24:21.585 ========= 00:24:21.585 00:24:21.585 Arbitration 00:24:21.585 =========== 00:24:21.585 Arbitration Burst: 1 00:24:21.585 00:24:21.585 Power Management 00:24:21.585 ================ 00:24:21.585 Number of Power States: 1 00:24:21.585 Current Power State: Power State #0 00:24:21.585 Power State #0: 00:24:21.585 Max Power: 0.00 W 00:24:21.585 Non-Operational State: Operational 00:24:21.585 Entry Latency: Not Reported 00:24:21.585 Exit Latency: Not Reported 00:24:21.585 Relative Read Throughput: 0 00:24:21.585 Relative Read Latency: 0 00:24:21.585 Relative Write Throughput: 0 00:24:21.585 Relative Write Latency: 0 00:24:21.585 Idle Power: Not Reported 00:24:21.585 Active Power: Not Reported 00:24:21.585 Non-Operational Permissive Mode: Not Supported 00:24:21.585 00:24:21.585 Health Information 00:24:21.585 ================== 00:24:21.585 Critical Warnings: 00:24:21.585 Available Spare Space: OK 00:24:21.585 Temperature: OK 00:24:21.585 Device Reliability: OK 00:24:21.585 Read Only: No 00:24:21.585 Volatile Memory Backup: OK 00:24:21.585 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:21.585 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:21.585 Available Spare: 0% 00:24:21.585 Available Spare Threshold: 0% 00:24:21.585 Life Percentage Used: 0% [2024-12-09 11:39:13.475360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.585 [2024-12-09 11:39:13.475365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x159d550) 00:24:21.585 [2024-12-09 11:39:13.475372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.585 [2024-12-09 11:39:13.475383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ffb80, cid 7, qid 0 00:24:21.585 [2024-12-09 11:39:13.475578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.585 [2024-12-09 11:39:13.475584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.585 [2024-12-09 11:39:13.475588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.585 [2024-12-09 11:39:13.475592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ffb80) on tqpair=0x159d550 00:24:21.585 [2024-12-09 11:39:13.475624] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:21.585 [2024-12-09 11:39:13.475633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff100) on tqpair=0x159d550 00:24:21.585 [2024-12-09 11:39:13.475639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.585 [2024-12-09 11:39:13.475645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff280) on tqpair=0x159d550 00:24:21.585 [2024-12-09 11:39:13.475650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:24:21.585 [2024-12-09 11:39:13.475656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff400) on tqpair=0x159d550 00:24:21.585 [2024-12-09 11:39:13.475661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.585 [2024-12-09 11:39:13.475666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff580) on tqpair=0x159d550 00:24:21.585 [2024-12-09 11:39:13.475671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.585 [2024-12-09 11:39:13.475679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.585 [2024-12-09 11:39:13.475683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.585 [2024-12-09 11:39:13.475686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159d550) 00:24:21.585 [2024-12-09 11:39:13.475693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.585 [2024-12-09 11:39:13.475705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff580, cid 3, qid 0 00:24:21.585 [2024-12-09 11:39:13.475863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.585 [2024-12-09 11:39:13.475869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.585 [2024-12-09 11:39:13.475872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.585 [2024-12-09 11:39:13.475876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff580) on tqpair=0x159d550 00:24:21.585 [2024-12-09 11:39:13.475883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.585 [2024-12-09 11:39:13.475887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.475890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159d550) 00:24:21.586 [2024-12-09 11:39:13.475897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.586 [2024-12-09 11:39:13.475910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff580, cid 3, qid 0 00:24:21.586 [2024-12-09 11:39:13.476119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.586 [2024-12-09 11:39:13.476126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.586 [2024-12-09 11:39:13.476130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff580) on tqpair=0x159d550 00:24:21.586 [2024-12-09 11:39:13.476138] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:21.586 [2024-12-09 11:39:13.476143] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:21.586 [2024-12-09 11:39:13.476153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159d550) 00:24:21.586 [2024-12-09 
11:39:13.476167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.586 [2024-12-09 11:39:13.476177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff580, cid 3, qid 0 00:24:21.586 [2024-12-09 11:39:13.476391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.586 [2024-12-09 11:39:13.476397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.586 [2024-12-09 11:39:13.476400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff580) on tqpair=0x159d550 00:24:21.586 [2024-12-09 11:39:13.476414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159d550) 00:24:21.586 [2024-12-09 11:39:13.476431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.586 [2024-12-09 11:39:13.476441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff580, cid 3, qid 0 00:24:21.586 [2024-12-09 11:39:13.476635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.586 [2024-12-09 11:39:13.476642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.586 [2024-12-09 11:39:13.476645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff580) on tqpair=0x159d550 00:24:21.586 [2024-12-09 11:39:13.476658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159d550) 00:24:21.586 [2024-12-09 11:39:13.476673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.586 [2024-12-09 11:39:13.476683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff580, cid 3, qid 0 00:24:21.586 [2024-12-09 11:39:13.476909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.586 [2024-12-09 11:39:13.476916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.586 [2024-12-09 11:39:13.476919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff580) on tqpair=0x159d550 00:24:21.586 [2024-12-09 11:39:13.476933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.476940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x159d550) 00:24:21.586 [2024-12-09 11:39:13.476947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.586 [2024-12-09 11:39:13.476957] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ff580, cid 3, qid 0 00:24:21.586 [2024-12-09 11:39:13.481021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.586 [2024-12-09 11:39:13.481029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.586 [2024-12-09 11:39:13.481033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.586 [2024-12-09 11:39:13.481036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ff580) on tqpair=0x159d550 00:24:21.586 [2024-12-09 11:39:13.481044] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:21.586 Data Units Read: 0 00:24:21.586 Data Units Written: 0 00:24:21.586 Host Read Commands: 0 00:24:21.586 Host Write Commands: 0 00:24:21.586 Controller Busy Time: 0 minutes 00:24:21.586 Power Cycles: 0 00:24:21.586 Power On Hours: 0 hours 00:24:21.586 Unsafe Shutdowns: 0 00:24:21.586 Unrecoverable Media Errors: 0 00:24:21.586 Lifetime Error Log Entries: 0 00:24:21.586 Warning Temperature Time: 0 minutes 00:24:21.586 Critical Temperature Time: 0 minutes 00:24:21.586 00:24:21.586 Number of Queues 00:24:21.586 ================ 00:24:21.586 Number of I/O Submission Queues: 127 00:24:21.586 Number of I/O Completion Queues: 127 00:24:21.586 00:24:21.586 Active Namespaces 00:24:21.586 ================= 00:24:21.586 Namespace ID:1 00:24:21.586 Error Recovery Timeout: Unlimited 00:24:21.586 Command Set Identifier: NVM (00h) 00:24:21.586 Deallocate: Supported 00:24:21.586 Deallocated/Unwritten Error: Not Supported 00:24:21.586 Deallocated Read Value: Unknown 00:24:21.586 Deallocate in Write Zeroes: Not Supported 00:24:21.586 Deallocated Guard Field: 0xFFFF 00:24:21.586 Flush: Supported 00:24:21.586 Reservation: Supported 00:24:21.586 Namespace Sharing Capabilities: Multiple Controllers 00:24:21.586 Size (in LBAs): 131072 (0GiB) 00:24:21.586 Capacity (in LBAs): 131072 (0GiB) 00:24:21.586 Utilization (in LBAs): 131072 (0GiB) 00:24:21.586 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:21.586 EUI64: ABCDEF0123456789 00:24:21.586 UUID: 9bdc468c-abe1-434c-9d11-00a64199a24e 00:24:21.586 Thin Provisioning: Not Supported 00:24:21.586 Per-NS Atomic Units: Yes 00:24:21.586 Atomic Boundary Size (Normal): 0 00:24:21.586 Atomic Boundary Size (PFail): 0 00:24:21.586 Atomic Boundary Offset: 0 00:24:21.586 Maximum Single Source Range Length: 65535 00:24:21.586 Maximum Copy Length: 65535 00:24:21.586 Maximum Source Range Count: 1 00:24:21.586 NGUID/EUI64 Never Reused: No 00:24:21.586 Namespace Write Protected: No 00:24:21.586 Number of LBA Formats: 1 00:24:21.586 Current LBA Format: LBA Format #00 00:24:21.586 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:21.586 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- #
nvmftestfini 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.586 rmmod nvme_tcp 00:24:21.586 rmmod nvme_fabrics 00:24:21.586 rmmod nvme_keyring 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3622071 ']' 00:24:21.586 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3622071 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3622071 ']' 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3622071 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3622071 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3622071' 00:24:21.587 killing process with pid 3622071 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3622071 00:24:21.587 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3622071 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.846 11:39:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.846 11:39:13 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.761 11:39:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.761 00:24:23.761 real 0m11.773s 00:24:23.761 user 0m8.470s 00:24:23.761 sys 0m6.276s 00:24:23.761 11:39:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.761 11:39:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.761 ************************************ 00:24:23.761 END TEST nvmf_identify 00:24:23.761 ************************************ 00:24:23.761 11:39:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:23.761 11:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.761 11:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.761 11:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.023 ************************************ 00:24:24.023 START TEST nvmf_perf 00:24:24.023 ************************************ 00:24:24.023 11:39:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:24.023 * Looking for test storage... 00:24:24.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.023 --rc genhtml_branch_coverage=1 00:24:24.023 --rc genhtml_function_coverage=1 00:24:24.023 --rc genhtml_legend=1 00:24:24.023 --rc geninfo_all_blocks=1 00:24:24.023 --rc geninfo_unexecuted_blocks=1 00:24:24.023 00:24:24.023 ' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.023 --rc genhtml_branch_coverage=1 00:24:24.023 --rc genhtml_function_coverage=1 00:24:24.023 --rc genhtml_legend=1 00:24:24.023 --rc geninfo_all_blocks=1 00:24:24.023 --rc geninfo_unexecuted_blocks=1 00:24:24.023 00:24:24.023 ' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.023 --rc genhtml_branch_coverage=1 00:24:24.023 --rc genhtml_function_coverage=1 00:24:24.023 --rc genhtml_legend=1 00:24:24.023 --rc geninfo_all_blocks=1 00:24:24.023 --rc geninfo_unexecuted_blocks=1 00:24:24.023 00:24:24.023 ' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.023 --rc genhtml_branch_coverage=1 00:24:24.023 --rc genhtml_function_coverage=1 00:24:24.023 --rc genhtml_legend=1 00:24:24.023 --rc geninfo_all_blocks=1 00:24:24.023 --rc geninfo_unexecuted_blocks=1 00:24:24.023 00:24:24.023 ' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.023 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.024 11:39:16 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.024 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.285 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.285 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.285 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.285 11:39:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:32.437 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:32.437 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:32.437 Found net devices under 0000:31:00.0: cvl_0_0 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.437 11:39:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:32.437 Found net devices under 0000:31:00.1: cvl_0_1 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.437 11:39:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.437 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:24:32.438 00:24:32.438 --- 10.0.0.2 ping statistics --- 00:24:32.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.438 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:24:32.438 00:24:32.438 --- 10.0.0.1 ping statistics --- 00:24:32.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.438 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3626491 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3626491 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3626491 ']' 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:32.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.438 11:39:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.438 [2024-12-09 11:39:23.545671] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:24:32.438 [2024-12-09 11:39:23.545732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.438 [2024-12-09 11:39:23.625270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.438 [2024-12-09 11:39:23.661015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.438 [2024-12-09 11:39:23.661049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.438 [2024-12-09 11:39:23.661057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.438 [2024-12-09 11:39:23.661064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.438 [2024-12-09 11:39:23.661070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.438 [2024-12-09 11:39:23.662598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.438 [2024-12-09 11:39:23.662710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.438 [2024-12-09 11:39:23.662864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.438 [2024-12-09 11:39:23.662864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:32.438 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:33.017 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:33.018 11:39:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:33.018 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:33.018 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:33.282 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:24:33.282 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:33.282 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:33.282 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:33.282 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:33.282 [2024-12-09 11:39:25.416163] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.543 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.543 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:33.543 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.804 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:33.804 11:39:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:34.065 11:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.065 [2024-12-09 11:39:26.158892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.065 11:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.326 11:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:34.326 11:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:34.326 11:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:34.326 11:39:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:35.711 Initializing NVMe Controllers 00:24:35.711 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:35.711 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:35.711 Initialization complete. Launching workers. 
00:24:35.711 ======================================================== 00:24:35.711 Latency(us) 00:24:35.711 Device Information : IOPS MiB/s Average min max 00:24:35.711 PCIE (0000:65:00.0) NSID 1 from core 0: 79059.27 308.83 404.07 13.29 4819.52 00:24:35.711 ======================================================== 00:24:35.711 Total : 79059.27 308.83 404.07 13.29 4819.52 00:24:35.711 00:24:35.711 11:39:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:37.094 Initializing NVMe Controllers 00:24:37.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:37.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:37.095 Initialization complete. Launching workers. 00:24:37.095 ======================================================== 00:24:37.095 Latency(us) 00:24:37.095 Device Information : IOPS MiB/s Average min max 00:24:37.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 108.93 0.43 9265.37 192.12 46231.36 00:24:37.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.97 0.21 19121.07 6981.52 47949.93 00:24:37.095 ======================================================== 00:24:37.095 Total : 162.90 0.64 12530.45 192.12 47949.93 00:24:37.095 00:24:37.095 11:39:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.480 Initializing NVMe Controllers 00:24:38.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:38.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:38.480 Initialization complete. Launching workers. 00:24:38.480 ======================================================== 00:24:38.480 Latency(us) 00:24:38.480 Device Information : IOPS MiB/s Average min max 00:24:38.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10468.88 40.89 3056.90 556.15 6526.97 00:24:38.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3802.69 14.85 8428.95 6974.87 17795.09 00:24:38.480 ======================================================== 00:24:38.480 Total : 14271.57 55.75 4488.29 556.15 17795.09 00:24:38.480 00:24:38.480 11:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:38.480 11:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:38.480 11:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:41.025 Initializing NVMe Controllers 00:24:41.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.025 Controller IO queue size 128, less than required. 00:24:41.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:41.025 Controller IO queue size 128, less than required. 00:24:41.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:41.025 Initialization complete. Launching workers. 00:24:41.025 ======================================================== 00:24:41.025 Latency(us) 00:24:41.025 Device Information : IOPS MiB/s Average min max 00:24:41.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2027.93 506.98 64002.31 43889.35 105644.16 00:24:41.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.24 150.56 225482.91 78447.28 371100.11 00:24:41.025 ======================================================== 00:24:41.025 Total : 2630.17 657.54 100976.99 43889.35 371100.11 00:24:41.025 00:24:41.025 11:39:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:41.025 No valid NVMe controllers or AIO or URING devices found 00:24:41.025 Initializing NVMe Controllers 00:24:41.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.026 Controller IO queue size 128, less than required. 00:24:41.026 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.026 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:41.026 Controller IO queue size 128, less than required. 00:24:41.026 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.026 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:41.026 WARNING: Some requested NVMe devices were skipped 00:24:41.286 11:39:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:43.828 Initializing NVMe Controllers 00:24:43.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.828 Controller IO queue size 128, less than required. 00:24:43.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:43.828 Controller IO queue size 128, less than required. 00:24:43.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:43.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:43.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:43.828 Initialization complete. Launching workers. 
00:24:43.828 00:24:43.828 ==================== 00:24:43.828 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:43.828 TCP transport: 00:24:43.828 polls: 23679 00:24:43.828 idle_polls: 14411 00:24:43.828 sock_completions: 9268 00:24:43.828 nvme_completions: 6909 00:24:43.828 submitted_requests: 10464 00:24:43.828 queued_requests: 1 00:24:43.828 00:24:43.828 ==================== 00:24:43.828 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:43.828 TCP transport: 00:24:43.828 polls: 19697 00:24:43.828 idle_polls: 9692 00:24:43.828 sock_completions: 10005 00:24:43.828 nvme_completions: 6653 00:24:43.828 submitted_requests: 9910 00:24:43.828 queued_requests: 1 00:24:43.828 ======================================================== 00:24:43.828 Latency(us) 00:24:43.828 Device Information : IOPS MiB/s Average min max 00:24:43.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1724.27 431.07 76024.72 39411.12 122822.93 00:24:43.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1660.37 415.09 77764.19 36885.34 127654.09 00:24:43.828 ======================================================== 00:24:43.828 Total : 3384.63 846.16 76878.03 36885.34 127654.09 00:24:43.828 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.828 rmmod nvme_tcp 00:24:43.828 rmmod nvme_fabrics 00:24:43.828 rmmod nvme_keyring 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3626491 ']' 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3626491 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3626491 ']' 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3626491 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.828 11:39:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3626491 00:24:44.088 11:39:36 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.088 11:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.088 11:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3626491' 00:24:44.088 killing process with pid 3626491 00:24:44.088 11:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3626491 00:24:44.088 11:39:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3626491 00:24:45.998 11:39:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.998 11:39:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.998 11:39:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.998 11:39:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.998 11:39:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.542 00:24:48.542 real 0m24.133s 00:24:48.542 user 0m58.704s 00:24:48.542 sys 0m8.371s 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:48.542 ************************************ 00:24:48.542 END TEST nvmf_perf 00:24:48.542 ************************************ 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.542 ************************************ 00:24:48.542 START TEST nvmf_fio_host 00:24:48.542 ************************************ 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:48.542 * Looking for test storage... 
00:24:48.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:48.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.542 --rc genhtml_branch_coverage=1 00:24:48.542 --rc genhtml_function_coverage=1 00:24:48.542 --rc genhtml_legend=1 00:24:48.542 --rc geninfo_all_blocks=1 00:24:48.542 --rc geninfo_unexecuted_blocks=1 00:24:48.542 00:24:48.542 ' 00:24:48.542 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.543 --rc genhtml_branch_coverage=1 00:24:48.543 --rc genhtml_function_coverage=1 00:24:48.543 --rc genhtml_legend=1 00:24:48.543 --rc geninfo_all_blocks=1 00:24:48.543 --rc geninfo_unexecuted_blocks=1 00:24:48.543 00:24:48.543 ' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.543 --rc genhtml_branch_coverage=1 00:24:48.543 --rc genhtml_function_coverage=1 00:24:48.543 --rc genhtml_legend=1 00:24:48.543 --rc geninfo_all_blocks=1 00:24:48.543 --rc geninfo_unexecuted_blocks=1 00:24:48.543 00:24:48.543 ' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.543 --rc genhtml_branch_coverage=1 00:24:48.543 --rc genhtml_function_coverage=1 00:24:48.543 --rc genhtml_legend=1 00:24:48.543 --rc geninfo_all_blocks=1 00:24:48.543 --rc geninfo_unexecuted_blocks=1 00:24:48.543 00:24:48.543 ' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.543 11:39:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:48.543 
11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.543 11:39:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.688 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:56.689 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:56.689 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:56.689 Found net devices under 0000:31:00.0: cvl_0_0 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:56.689 Found net devices under 0000:31:00.1: cvl_0_1 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:24:56.689 00:24:56.689 --- 10.0.0.2 ping statistics --- 00:24:56.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.689 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:24:56.689 00:24:56.689 --- 10.0.0.1 ping statistics --- 00:24:56.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.689 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3633617 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3633617 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3633617 ']' 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.689 11:39:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.689 [2024-12-09 11:39:47.902451] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:24:56.689 [2024-12-09 11:39:47.902519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.689 [2024-12-09 11:39:47.986879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.689 [2024-12-09 11:39:48.028510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.690 [2024-12-09 11:39:48.028545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.690 [2024-12-09 11:39:48.028553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.690 [2024-12-09 11:39:48.028560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.690 [2024-12-09 11:39:48.028566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.690 [2024-12-09 11:39:48.030087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.690 [2024-12-09 11:39:48.030214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.690 [2024-12-09 11:39:48.030357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.690 [2024-12-09 11:39:48.030357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.690 11:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.690 11:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:56.690 11:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:56.950 [2024-12-09 11:39:48.858612] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.950 11:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:56.950 11:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:56.950 11:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.950 11:39:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:56.950 Malloc1 00:24:57.211 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.211 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:57.472 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.733 [2024-12-09 11:39:49.638748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:57.733 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:58.016 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:58.016 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:58.016 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:58.016 11:39:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:58.279 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:58.279 fio-3.35 00:24:58.279 Starting 1 thread 00:25:00.821 00:25:00.821 test: (groupid=0, jobs=1): 
err= 0: pid=3634360: Mon Dec 9 11:39:52 2024
00:25:00.821 read: IOPS=12.8k, BW=50.1MiB/s (52.6MB/s)(103MiB/2046msec)
00:25:00.821 slat (usec): min=2, max=281, avg= 2.15, stdev= 2.42
00:25:00.821 clat (usec): min=3636, max=50016, avg=5461.71, stdev=2223.89
00:25:00.821 lat (usec): min=3670, max=50018, avg=5463.86, stdev=2223.89
00:25:00.821 clat percentiles (usec):
00:25:00.821 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014],
00:25:00.821 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5473],
00:25:00.821 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 5997],
00:25:00.821 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[48497], 99.95th=[49021],
00:25:00.821 | 99.99th=[50070]
00:25:00.821 bw ( KiB/s): min=50768, max=52960, per=100.00%, avg=52382.00, stdev=1076.50, samples=4
00:25:00.821 iops : min=12692, max=13240, avg=13095.50, stdev=269.12, samples=4
00:25:00.821 write: IOPS=12.8k, BW=50.1MiB/s (52.6MB/s)(103MiB/2046msec); 0 zone resets
00:25:00.821 slat (usec): min=2, max=269, avg= 2.22, stdev= 1.83
00:25:00.821 clat (usec): min=2888, max=49478, avg=4439.23, stdev=2023.31
00:25:00.821 lat (usec): min=2905, max=49480, avg=4441.45, stdev=2023.33
00:25:00.821 clat percentiles (usec):
00:25:00.821 | 1.00th=[ 3621], 5.00th=[ 3851], 10.00th=[ 3949], 20.00th=[ 4113],
00:25:00.821 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424],
00:25:00.821 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817],
00:25:00.821 | 99.00th=[ 5145], 99.50th=[ 5473], 99.90th=[47973], 99.95th=[48497],
00:25:00.821 | 99.99th=[49546]
00:25:00.821 bw ( KiB/s): min=51192, max=53032, per=100.00%, avg=52420.00, stdev=831.53, samples=4
00:25:00.821 iops : min=12798, max=13258, avg=13105.00, stdev=207.88, samples=4
00:25:00.821 lat (msec) : 4=6.27%, 10=93.49%, 50=0.24%, 100=0.01%
00:25:00.821 cpu : usr=74.72%, sys=24.01%, ctx=55, majf=0, minf=16
00:25:00.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:25:00.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:00.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:00.821 issued rwts: total=26261,26267,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:00.821 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:00.821
00:25:00.821 Run status group 0 (all jobs):
00:25:00.821 READ: bw=50.1MiB/s (52.6MB/s), 50.1MiB/s-50.1MiB/s (52.6MB/s-52.6MB/s), io=103MiB (108MB), run=2046-2046msec
00:25:00.821 WRITE: bw=50.1MiB/s (52.6MB/s), 50.1MiB/s-50.1MiB/s (52.6MB/s-52.6MB/s), io=103MiB (108MB), run=2046-2046msec
00:25:00.821 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:00.821 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:25:00.822 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:25:01.082 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:25:01.082 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:25:01.082 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:25:01.082 11:39:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:01.082 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:25:01.082 fio-3.35
00:25:01.082 Starting 1 thread
00:25:02.992 [2024-12-09 11:39:54.698247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2527210 is same with the state(6) to be set
00:25:02.992 [2024-12-09 11:39:54.698309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2527210 is same with the state(6) to be set
00:25:03.563
00:25:03.563 test: (groupid=0, jobs=1): err= 0: pid=3634977: Mon Dec 9 11:39:55 2024
00:25:03.563 read: IOPS=9310, BW=145MiB/s (153MB/s)(292MiB/2007msec)
00:25:03.563 slat (usec): min=3, max=111, avg= 3.65, stdev= 1.63
00:25:03.563 clat (usec): min=1924, max=15804, avg=8271.17, stdev=1959.01
00:25:03.563 lat (usec): min=1927, max=15808, avg=8274.82, stdev=1959.11
00:25:03.563 clat percentiles (usec):
00:25:03.563 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6521],
00:25:03.563 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8848],
00:25:03.563 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207],
00:25:03.563 | 99.00th=[13435], 99.50th=[14484], 99.90th=[15270], 99.95th=[15401],
00:25:03.563 | 99.99th=[15795]
00:25:03.563 bw ( KiB/s): min=65088, max=88192, per=49.21%, avg=73312.00, stdev=10402.51, samples=4
00:25:03.563 iops : min= 4068, max= 5512, avg=4582.00, stdev=650.16, samples=4
00:25:03.563 write: IOPS=5537, BW=86.5MiB/s (90.7MB/s)(150MiB/1731msec); 0 zone resets
00:25:03.563 slat (usec): min=39, max=451, avg=41.07, stdev= 8.43
00:25:03.563 clat (usec): min=3373, max=16468, avg=9557.57, stdev=1675.25
00:25:03.563 lat (usec): min=3413, max=16508, avg=9598.64, stdev=1676.76
00:25:03.563 clat percentiles (usec):
00:25:03.563 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8160],
00:25:03.563 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765],
00:25:03.563 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11600], 95.00th=[12780],
00:25:03.563 | 99.00th=[14615], 99.50th=[15139], 99.90th=[15533], 99.95th=[15664],
00:25:03.563 | 99.99th=[16450]
00:25:03.563 bw ( KiB/s): min=68192, max=91552, per=86.01%, avg=76208.00, stdev=10578.88, samples=4
00:25:03.563 iops : min= 4262, max= 5722, avg=4763.00, stdev=661.18, samples=4
00:25:03.563 lat (msec) : 2=0.01%, 4=0.51%, 10=72.67%, 20=26.81%
00:25:03.563 cpu : usr=84.90%, sys=13.71%, ctx=16, majf=0, minf=42
00:25:03.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:03.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:03.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:03.563 issued rwts: total=18686,9586,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:03.563 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:03.563
00:25:03.563 Run status group 0 (all jobs):
00:25:03.563 READ: bw=145MiB/s (153MB/s), 145MiB/s-145MiB/s (153MB/s-153MB/s), io=292MiB (306MB), run=2007-2007msec
00:25:03.563 WRITE: bw=86.5MiB/s (90.7MB/s), 86.5MiB/s-86.5MiB/s (90.7MB/s-90.7MB/s), io=150MiB (157MB), run=1731-1731msec
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:03.563 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:03.563 rmmod nvme_tcp
00:25:03.563 rmmod nvme_fabrics
00:25:03.824 rmmod nvme_keyring
00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host --
nvmf/common.sh@517 -- # '[' -n 3633617 ']' 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3633617 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3633617 ']' 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3633617 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3633617 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3633617' 00:25:03.824 killing process with pid 3633617 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3633617 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3633617 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.824 11:39:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.410 00:25:06.410 real 0m17.868s 00:25:06.410 user 1m4.161s 00:25:06.410 sys 0m7.586s 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.410 ************************************ 00:25:06.410 END TEST nvmf_fio_host 00:25:06.410 ************************************ 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.410 11:39:58 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.410 ************************************ 00:25:06.410 START TEST nvmf_failover 00:25:06.410 ************************************ 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.410 * Looking for test storage... 00:25:06.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:06.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.410 --rc genhtml_branch_coverage=1 00:25:06.410 --rc genhtml_function_coverage=1 00:25:06.410 --rc genhtml_legend=1 00:25:06.410 --rc geninfo_all_blocks=1 00:25:06.410 --rc geninfo_unexecuted_blocks=1 00:25:06.410 00:25:06.410 ' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:06.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.410 --rc genhtml_branch_coverage=1 00:25:06.410 --rc genhtml_function_coverage=1 00:25:06.410 --rc genhtml_legend=1 00:25:06.410 --rc geninfo_all_blocks=1 00:25:06.410 --rc geninfo_unexecuted_blocks=1 00:25:06.410 00:25:06.410 ' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:06.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.410 --rc genhtml_branch_coverage=1 00:25:06.410 --rc genhtml_function_coverage=1 00:25:06.410 --rc genhtml_legend=1 00:25:06.410 --rc geninfo_all_blocks=1 00:25:06.410 --rc geninfo_unexecuted_blocks=1 00:25:06.410 00:25:06.410 ' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:06.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.410 --rc genhtml_branch_coverage=1 00:25:06.410 --rc genhtml_function_coverage=1 00:25:06.410 --rc genhtml_legend=1 00:25:06.410 --rc geninfo_all_blocks=1 00:25:06.410 --rc geninfo_unexecuted_blocks=1 00:25:06.410 00:25:06.410 ' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.410 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
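The "[: : integer expression expected" complaint logged at common.sh line 33 above (here and in the nvmf_fio_host run earlier) is a test(1) quirk rather than a test failure: -eq requires integers on both sides, so the traced '[' '' -eq 1 ']' errors out instead of evaluating to false, and the trace simply falls through to line 37. A minimal sketch of the pitfall and two common guards (VAR is an illustrative name, not a variable from common.sh):

    VAR=""
    [ "$VAR" -eq 1 ] && echo yes        # bash: [: : integer expression expected
    [ "${VAR:-0}" -eq 1 ] && echo yes   # guard: default an empty expansion to 0
    (( VAR == 1 )) && echo yes          # guard: arithmetic context treats empty as 0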
00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.411 11:39:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:14.557 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:14.557 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:14.557 Found net devices under 0000:31:00.0: cvl_0_0 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:14.557 Found net devices under 0000:31:00.1: cvl_0_1 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.557 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:14.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:14.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms
00:25:14.558
00:25:14.558 --- 10.0.0.2 ping statistics ---
00:25:14.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.558 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:14.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:14.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms
00:25:14.558
00:25:14.558 --- 10.0.0.1 ping statistics ---
00:25:14.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.558 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms
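Condensed from the xtrace above, the namespace plumbing that nvmf_tcp_init performs is the following; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are this run's values, not fixed constants:

  # Target port moves into its own netns; the initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port toward the initiator side, then verify reachability
  # in both directions before starting the target.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1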
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3639704
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3639704
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3639704 ']'
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:14.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:14.558 11:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:14.558 [2024-12-09 11:40:05.791932] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:25:14.558 [2024-12-09 11:40:05.791998] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:14.558 [2024-12-09 11:40:05.893402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:14.558 [2024-12-09 11:40:05.944720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:14.558 [2024-12-09 11:40:05.944772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:14.558 [2024-12-09 11:40:05.944781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:14.558 [2024-12-09 11:40:05.944788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:14.558 [2024-12-09 11:40:05.944795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:14.558 [2024-12-09 11:40:05.946875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:14.558 [2024-12-09 11:40:05.947070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:14.558 [2024-12-09 11:40:05.947113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:14.558 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:14.558 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:25:14.558 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:14.558 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:14.558 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:14.558 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:14.558 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:14.819 [2024-12-09 11:40:06.784308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:14.819 11:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:15.080 Malloc0
00:25:15.081 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:15.081 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:15.342 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:15.603 [2024-12-09 11:40:07.517456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:15.603 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:15.603 [2024-12-09 11:40:07.689936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:15.603 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:15.864 [2024-12-09 11:40:07.870488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
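Stripped of the xtrace noise, the provisioning sequence run by host/failover.sh@22-28 above is:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB ramdisk, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Three listeners on one subsystem give the host three candidate paths.
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done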
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3640095
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3640095 /var/tmp/bdevperf.sock
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3640095 ']'
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:15.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:15.864 11:40:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:16.815 11:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:16.815 11:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:25:16.815 11:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.077 NVMe0n1
00:25:17.077 11:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.338
00:25:17.338 11:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3640408
00:25:17.338 11:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:17.338 11:40:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:18.279 11:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:18.539 11:40:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:21.839 11:40:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:21.839
00:25:21.839 11:40:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
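The failover wiring itself is visible in the commands above: bdevperf starts with -z (wait for RPC configuration), ports 4420 and 4421 are attached under the same bdev name with -x failover, which makes 4421 a standby path, and the active listener is then removed to force a path switch. In outline:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  brpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover    # active path
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover    # standby path
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  # Drop the active listener; in-flight I/O is aborted and retried on 4421.
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420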
00:25:22.099 [2024-12-09 11:40:14.095178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bcfc0 is same with the state(6) to be set
00:25:22.099 (the same *ERROR* line for tqpair=0x15bcfc0 repeated ~15 more times, identical apart from timestamps; trimmed)
00:25:22.100 11:40:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:25.438 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:25.438 [2024-12-09 11:40:17.280878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:25.438 11:40:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:26.386 11:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
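The tqpair *ERROR* lines above are the torn-down connection draining its receive state, not a test failure. To see which path the controller is actually using at a moment like this, the standard bdev_nvme_get_controllers RPC can be queried on the bdevperf socket; piping through jq is illustrative only, and the exact fields returned vary by SPDK version:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0 | jq .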
00:25:26.386 [2024-12-09 11:40:18.475023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482620 is same with the state(6) to be set
00:25:26.386 (the same *ERROR* line for tqpair=0x1482620 repeated several dozen times between 11:40:18.475023 and 11:40:18.475323, identical apart from timestamps; trimmed)
00:25:26.387 11:40:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3640408
00:25:32.989 {
00:25:32.989 "results": [
00:25:32.989 {
00:25:32.989 "job": "NVMe0n1",
00:25:32.989 "core_mask": "0x1",
00:25:32.989 "workload": "verify",
00:25:32.989 "status": "finished",
00:25:32.989 "verify_range": {
00:25:32.989 "start": 0,
00:25:32.989 "length": 16384
00:25:32.989 },
00:25:32.989 "queue_depth": 128,
00:25:32.989 "io_size": 4096,
00:25:32.989 "runtime": 15.006566,
00:25:32.989 "iops": 10141.560700829225,
00:25:32.989 "mibps": 39.61547148761416,
00:25:32.989 "io_failed": 8157,
00:25:32.989 "io_timeout": 0,
00:25:32.989 "avg_latency_us": 11943.818752414034,
00:25:32.989 "min_latency_us": 785.0666666666667,
00:25:32.989 "max_latency_us": 30583.466666666667
00:25:32.989 }
00:25:32.989 ],
00:25:32.989 "core_count": 1
00:25:32.989 }
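The summary block above is plain JSON on bdevperf's stdout, so it can be post-processed directly; for example, assuming the block were captured to a hypothetical results.json:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os"' results.json
  # -> NVMe0n1: 10141.560700829225 IOPS, 8157 failed I/Os

The 8157 failed I/Os correspond to requests aborted during the listener removals; the verify workload still finishes, which is what the test checks.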
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3640095
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3640095 ']'
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3640095
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3640095
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3640095'
00:25:32.989 killing process with pid 3640095
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3640095
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3640095
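killprocess, traced above, refuses to signal anything it does not recognize as its own reactor process; its core logic is approximately the following sketch, not the verbatim helper:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                                    # already gone
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill sudo
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                                    # reap the child
  }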
00:25:32.989 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-09 11:40:07.963097] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
[2024-12-09 11:40:07.963156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640095 ]
[2024-12-09 11:40:08.034778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 11:40:08.070637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
00:25:32.989 10225.00 IOPS, 39.94 MiB/s [2024-12-09T10:40:25.151Z]
[2024-12-09 11:40:10.444559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-09 11:40:10.444602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 11:40:10.444619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-09 11:40:10.444628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(matching WRITE command / ABORTED - SQ DELETION completion pairs for lba 88128 through 88872, cid varying, trimmed; every I/O outstanding on the deleted submission queue is printed and aborted the same way)
[2024-12-09 11:40:10.446251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-12-09 11:40:10.446259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88880 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 11:40:10.446268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 11:40:10.446306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 11:40:10.446316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 11:40:10.446325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 11:40:10.446332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 11:40:10.446342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 11:40:10.446349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 11:40:10.446358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 11:40:10.446365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0
00:25:32.992 [2024-12-09 11:40:10.446373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc930 is same with the state(6) to be set
00:25:32.992 [2024-12-09 11:40:10.446585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:32.992 [2024-12-09 11:40:10.446592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:32.992 [2024-12-09 11:40:10.446599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88888 len:8 PRP1 0x0 PRP2 0x0
00:25:32.992 [2024-12-09 11:40:10.446607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~100 similar abort sequences omitted, each an *ERROR*: aborting queued i/o, a *NOTICE*: Command completed manually:, and an ABORTED - SQ DELETION (00/08) completion, covering WRITE lba:88896-89000 (step 8), READ lba:88000-88112 (step 8), WRITE lba:89008, READ lba:87992, and WRITE lba:88120-88672 (step 8) ...]
00:25:32.999 [2024-12-09 11:40:10.467346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:32.999 [2024-12-09 11:40:10.467351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:32.999 [2024-12-09 11:40:10.467358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88680 len:8 PRP1 0x0 PRP2 0x0
00:25:32.999 [2024-12-09 11:40:10.467365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:32.999 [2024-12-09 11:40:10.467372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88688 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88696 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88704 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88712 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88720 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88728 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 
11:40:10.467539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88736 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88744 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88752 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88760 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88768 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.999 [2024-12-09 11:40:10.467667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.999 [2024-12-09 11:40:10.467674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.999 [2024-12-09 11:40:10.467680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88776 len:8 PRP1 0x0 PRP2 0x0 00:25:32.999 [2024-12-09 11:40:10.467688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467701] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88784 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88792 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88800 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88808 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88816 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88824 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88832 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88840 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88848 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88856 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.467974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88864 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.467981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.467989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.467994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 11:40:10.468000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88872 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.468008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.468032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.000 [2024-12-09 11:40:10.468038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.000 [2024-12-09 
11:40:10.468044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88880 len:8 PRP1 0x0 PRP2 0x0 00:25:33.000 [2024-12-09 11:40:10.468051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:10.468095] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:33.000 [2024-12-09 11:40:10.468105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:33.000 [2024-12-09 11:40:10.468156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cc930 (9): Bad file descriptor 00:25:33.000 [2024-12-09 11:40:10.471730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:33.000 [2024-12-09 11:40:10.582182] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:25:33.000 9694.00 IOPS, 37.87 MiB/s [2024-12-09T10:40:25.162Z] 9970.33 IOPS, 38.95 MiB/s [2024-12-09T10:40:25.162Z] 10051.00 IOPS, 39.26 MiB/s [2024-12-09T10:40:25.162Z] [2024-12-09 11:40:14.097582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.000 [2024-12-09 11:40:14.097626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:14.097642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.000 [2024-12-09 11:40:14.097651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.000 [2024-12-09 11:40:14.097660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.000 [2024-12-09 11:40:14.097668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 
[2024-12-09 11:40:14.097737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.097983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.097993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.098000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.098018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.001 [2024-12-09 11:40:14.098026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.001 [2024-12-09 11:40:14.098035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098095] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 
[2024-12-09 11:40:14.098452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.002 [2024-12-09 11:40:14.098477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.002 [2024-12-09 11:40:14.098513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.002 [2024-12-09 11:40:14.098523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.003 [2024-12-09 11:40:14.098530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.003 [2024-12-09 11:40:14.098547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.003 [2024-12-09 11:40:14.098564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.003 [2024-12-09 11:40:14.098581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.003 [2024-12-09 11:40:14.098598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22360 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.098989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.003 [2024-12-09 11:40:14.098996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.003 [2024-12-09 11:40:14.099006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 11:40:14.099138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.004 [2024-12-09 11:40:14.099147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.004 [2024-12-09 
11:40:14.099155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.004 [2024-12-09 11:40:14.099164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.004 [2024-12-09 11:40:14.099174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION completion pair repeats for each in-flight write from lba:22456 through lba:22688 ...]
00:25:33.005 [2024-12-09 11:40:14.099723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.005 [2024-12-09 11:40:14.099731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22696 len:8 PRP1 0x0 PRP2 0x0
00:25:33.005 [2024-12-09 11:40:14.099739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.005 [2024-12-09 11:40:14.099776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.005 [2024-12-09 11:40:14.099786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeats for qid:0 cid:2, cid:1, and cid:0 ...]
00:25:33.005 [2024-12-09 11:40:14.099842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc930 is same with the state(6) to be set
00:25:33.005 [2024-12-09 11:40:14.100104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.005 [2024-12-09 11:40:14.100113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.005 [2024-12-09 11:40:14.100119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22704 len:8 PRP1 0x0 PRP2 0x0
00:25:33.005 [2024-12-09 11:40:14.100127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.005 [2024-12-09 11:40:14.100138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.005 [2024-12-09 11:40:14.100144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.005 [2024-12-09 11:40:14.100151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22712 len:8 PRP1 0x0 PRP2 0x0
00:25:33.005 [2024-12-09 11:40:14.100158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
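The run above is the host-side signature of a submission queue being torn down, consistent with the nvme_tcp recv-state error just logged: commands already in flight on qid:1 (printed with SGL DATA BLOCK buffers) and the admin queue's ASYNC EVENT REQUESTs complete with ABORTED - SQ DELETION, and commands still queued in software (printed with PRP1 0x0 PRP2 0x0, never submitted) are then completed manually with the same status. The (00/08) in each completion is SCT 0x0 (generic command status) / SC 0x08 (aborted due to SQ deletion). A minimal sketch of how a host application could recognize this status in its completion callback, assuming the public spdk/nvme.h API; requeue_io() is a hypothetical application helper, not part of SPDK:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical application helper: would resubmit the I/O once the
     * queue pair is reconnected; stubbed out here. */
    static void
    requeue_io(void *ctx)
    {
            (void)ctx;
    }

    /* spdk_nvme_cmd_cb-style completion callback. The (00/08) completions
     * printed in this log decode to SCT 0x0 / SC 0x08, i.e. the command was
     * aborted because its submission queue was deleted. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return; /* normal completion */
            }
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Aborted by qpair teardown; typically retried by the
                     * application after the qpair is reconnected. */
                    requeue_io(ctx);
                    return;
            }
            fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x dnr=%d\n",
                    cpl->status.sct, cpl->status.sc, cpl->status.dnr);
    }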
00:25:33.005 [2024-12-09 11:40:14.100166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.005 [2024-12-09 11:40:14.100173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.005 [2024-12-09 11:40:14.100179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:8 PRP1 0x0 PRP2 0x0
00:25:33.005 [2024-12-09 11:40:14.100187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting queued i/o / Command completed manually / ABORTED - SQ DELETION sequence repeats for the remaining queued commands: WRITEs lba:22728 through lba:22760, READs lba:21744 through lba:22056, and WRITEs lba:22064 through lba:22512 ...]
00:25:33.012 [2024-12-09 11:40:14.120772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.012 [2024-12-09 11:40:14.120777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.012 [2024-12-09 11:40:14.120784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:22520 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.120791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.120811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.120818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.120838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22536 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.120845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.120865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22544 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.120872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.120894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22552 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.120901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.120920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.120928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.120947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22568 len:8 PRP1 0x0 PRP2 0x0 
00:25:33.012 [2024-12-09 11:40:14.120955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.120974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22576 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.120982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.120990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.120995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.012 [2024-12-09 11:40:14.121001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22584 len:8 PRP1 0x0 PRP2 0x0 00:25:33.012 [2024-12-09 11:40:14.121008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.012 [2024-12-09 11:40:14.121025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.012 [2024-12-09 11:40:14.121031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22600 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22608 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22616 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22632 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22640 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22648 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22664 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22672 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22680 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.013 [2024-12-09 11:40:14.121388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.013 [2024-12-09 11:40:14.121394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22696 len:8 PRP1 0x0 PRP2 0x0 00:25:33.013 [2024-12-09 11:40:14.121401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.013 [2024-12-09 11:40:14.121443] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:33.013 [2024-12-09 11:40:14.121453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:33.013 [2024-12-09 11:40:14.121498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cc930 (9): Bad file descriptor 00:25:33.013 [2024-12-09 11:40:14.125069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:33.013 [2024-12-09 11:40:14.195187] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
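The "(00/08)" on every one of these manual completions decodes as status code type 0x0 (generic) / status code 0x08 (command aborted due to SQ deletion): when the TCP path fails, bdev_nvme tears down the old qpair, completes every still-queued command with that status, and only then starts failover from 10.0.0.2:4421 to 10.0.0.2:4422 and resets the controller. Below is a minimal sketch of how an application-level completion callback could recognize this status and requeue the I/O for the new path. The types and constants (struct spdk_nvme_cpl, spdk_nvme_cpl_is_error, SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION) are from SPDK's public header spdk/nvme.h; io_complete_cb and app_requeue_io are hypothetical application code, not part of this test:

#include "spdk/nvme.h"

/* Hypothetical application helper: park the I/O so it can be resubmitted
 * after the reset/failover completes. */
static void
app_requeue_io(void *io_ctx)
{
    (void)io_ctx; /* application-specific requeue logic would go here */
}

/* I/O completion callback, matching the spdk_nvme_cmd_cb signature. */
static void
io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl) &&
        cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        /* "ABORTED - SQ DELETION (00/08)": the submission queue was
         * deleted (here, during path failover), so the command may be
         * resubmitted once a new path is up. */
        app_requeue_io(io_ctx);
        return;
    }
    /* ... handle success and other error statuses here ... */
}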
00:25:33.014 9894.60 IOPS, 38.65 MiB/s [2024-12-09T10:40:25.176Z] 9948.33 IOPS, 38.86 MiB/s [2024-12-09T10:40:25.176Z] 9988.86 IOPS, 39.02 MiB/s [2024-12-09T10:40:25.176Z] 10018.12 IOPS, 39.13 MiB/s [2024-12-09T10:40:25.176Z] 10042.56 IOPS, 39.23 MiB/s [2024-12-09T10:40:25.176Z]
[repetitive span condensed: four queued ASYNC EVENT REQUESTs (0c) on qid:0 (cid:0 through cid:3, nsid:0 cdw10:00000000 cdw11:00000000) each completed ABORTED - SQ DELETION (00/08), timestamps 11:40:18.478105 through 11:40:18.478189]
00:25:33.014 [2024-12-09 11:40:18.478197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc930 is same with the state(6) to be set
[repetitive span condensed: queued READ commands on sqid:1 (cid varies, nsid:1, lba:368 through lba:784 step 8, len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 11:40:18.478248 through 11:40:18.479270]
[repetitive span condensed: queued WRITE commands on sqid:1 (cid varies, nsid:1, lba:792 through lba:1368 step 8, len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 11:40:18.479146 through 11:40:18.480381]
00:25:33.018 [2024-12-09 11:40:18.480391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.018 [2024-12-09 11:40:18.480399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.018 [2024-12-09 11:40:18.480417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.018 [2024-12-09 11:40:18.480424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.018 [2024-12-09 11:40:18.480431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1384 len:8 PRP1 0x0 PRP2 0x0 00:25:33.018 [2024-12-09 11:40:18.480439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.018 [2024-12-09 11:40:18.480479] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:33.018 [2024-12-09 11:40:18.480489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:33.018 [2024-12-09 11:40:18.484064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:33.018 [2024-12-09 11:40:18.484089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cc930 (9): Bad file descriptor 00:25:33.018 [2024-12-09 11:40:18.507457] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:33.018 10064.50 IOPS, 39.31 MiB/s [2024-12-09T10:40:25.181Z] 10085.73 IOPS, 39.40 MiB/s [2024-12-09T10:40:25.181Z] 10113.33 IOPS, 39.51 MiB/s [2024-12-09T10:40:25.181Z] 10126.54 IOPS, 39.56 MiB/s [2024-12-09T10:40:25.181Z] 10138.00 IOPS, 39.60 MiB/s 00:25:33.019 Latency(us) 00:25:33.019 [2024-12-09T10:40:25.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.019 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:33.019 Verification LBA range: start 0x0 length 0x4000 00:25:33.019 NVMe0n1 : 15.01 10141.56 39.62 543.56 0.00 11943.82 785.07 30583.47 00:25:33.019 [2024-12-09T10:40:25.181Z] =================================================================================================================== 00:25:33.019 [2024-12-09T10:40:25.181Z] Total : 10141.56 39.62 543.56 0.00 11943.82 785.07 30583.47 00:25:33.019 Received shutdown signal, test time was about 15.000000 seconds 00:25:33.019 00:25:33.019 Latency(us) 00:25:33.019 [2024-12-09T10:40:25.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.019 [2024-12-09T10:40:25.181Z] =================================================================================================================== 00:25:33.019 [2024-12-09T10:40:25.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3643415 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3643415 /var/tmp/bdevperf.sock 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:33.019 
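The trace above shows failover.sh validating the first bdevperf run: it counts 'Resetting controller successful' lines in the captured log, requires exactly three (one per failover), and then relaunches bdevperf in wait-for-RPC mode on a private socket. A minimal bash sketch of that pattern follows; the log path variable and the error handling are assumptions for illustration, not the exact code of the script:

  # Sketch: assert the expected number of controller resets, then restart bdevperf.
  count=$(grep -c 'Resetting controller successful' "$log_file")   # $log_file is assumed
  if (( count != 3 )); then
      echo "expected 3 successful resets, saw $count" >&2
      exit 1
  fi
  # -z waits for an RPC-driven test start and -r sets the app's RPC socket path;
  # the remaining flags mirror the invocation seen in the trace above.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!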
11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3643415 ']' 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:33.019 11:40:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:33.019 [2024-12-09 11:40:25.028594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:33.019 11:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:33.284 [2024-12-09 11:40:25.205071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:33.284 11:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:33.547 NVMe0n1 00:25:33.547 11:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:33.809 00:25:33.809 11:40:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.380 00:25:34.380 11:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:34.380 11:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:34.380 11:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:34.641 11:40:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:37.944 11:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:25:37.944 11:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:37.944 11:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:37.944 11:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3644432 00:25:37.944 11:40:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3644432 00:25:38.888 { 00:25:38.888 "results": [ 00:25:38.888 { 00:25:38.888 "job": "NVMe0n1", 00:25:38.888 "core_mask": "0x1", 00:25:38.888 "workload": "verify", 00:25:38.888 "status": "finished", 00:25:38.888 "verify_range": { 00:25:38.888 "start": 0, 00:25:38.888 "length": 16384 00:25:38.888 }, 00:25:38.888 "queue_depth": 128, 00:25:38.888 "io_size": 4096, 00:25:38.888 "runtime": 1.006717, 00:25:38.888 "iops": 11094.478388663349, 00:25:38.888 "mibps": 43.337806205716205, 00:25:38.888 "io_failed": 0, 00:25:38.888 "io_timeout": 0, 00:25:38.888 "avg_latency_us": 11478.631096785746, 00:25:38.889 "min_latency_us": 1010.3466666666667, 00:25:38.889 "max_latency_us": 10103.466666666667 00:25:38.889 } 00:25:38.889 ], 00:25:38.889 "core_count": 1 00:25:38.889 } 00:25:38.889 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:38.889 [2024-12-09 11:40:24.681388] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:25:38.889 [2024-12-09 11:40:24.681445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643415 ] 00:25:38.889 [2024-12-09 11:40:24.753706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.889 [2024-12-09 11:40:24.788260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.889 [2024-12-09 11:40:26.668812] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:38.889 [2024-12-09 11:40:26.668858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.889 [2024-12-09 11:40:26.668870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.889 [2024-12-09 11:40:26.668880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.889 [2024-12-09 11:40:26.668888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.889 [2024-12-09 11:40:26.668896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.889 [2024-12-09 11:40:26.668904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.889 [2024-12-09 11:40:26.668912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.889 [2024-12-09 11:40:26.668919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.889 [2024-12-09 11:40:26.668931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:38.889 [2024-12-09 11:40:26.668960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:38.889 [2024-12-09 11:40:26.668975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2c930 (9): Bad file descriptor 00:25:38.889 [2024-12-09 11:40:26.681565] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:38.889 Running I/O for 1 seconds... 00:25:38.889 11034.00 IOPS, 43.10 MiB/s 00:25:38.889 Latency(us) 00:25:38.889 [2024-12-09T10:40:31.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.889 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:38.889 Verification LBA range: start 0x0 length 0x4000 00:25:38.889 NVMe0n1 : 1.01 11094.48 43.34 0.00 0.00 11478.63 1010.35 10103.47 00:25:38.889 [2024-12-09T10:40:31.051Z] =================================================================================================================== 00:25:38.889 [2024-12-09T10:40:31.051Z] Total : 11094.48 43.34 0.00 0.00 11478.63 1010.35 10103.47 00:25:38.889 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.889 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:39.150 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.410 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.410 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:39.410 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.670 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3643415 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3643415 ']' 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3643415 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3643415 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3643415' 00:25:42.968 killing process with pid 3643415 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3643415 00:25:42.968 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3643415 00:25:42.968 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:42.968 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.227 rmmod nvme_tcp 00:25:43.227 rmmod nvme_fabrics 00:25:43.227 rmmod nvme_keyring 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3639704 ']' 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3639704 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3639704 ']' 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3639704 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.227 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639704 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639704' 00:25:43.487 killing process with pid 3639704 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@973 -- # kill 3639704 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3639704 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:43.487 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.488 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:43.488 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.488 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.488 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.488 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.488 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.488 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.032 00:25:46.032 real 0m39.496s 00:25:46.032 user 2m1.093s 00:25:46.032 sys 0m8.403s 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:46.032 ************************************ 00:25:46.032 END TEST nvmf_failover 00:25:46.032 ************************************ 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.032 ************************************ 00:25:46.032 START TEST nvmf_host_discovery 00:25:46.032 ************************************ 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:46.032 * Looking for test storage... 
00:25:46.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.032 --rc genhtml_branch_coverage=1 00:25:46.032 --rc genhtml_function_coverage=1 00:25:46.032 --rc genhtml_legend=1 00:25:46.032 --rc geninfo_all_blocks=1 00:25:46.032 --rc geninfo_unexecuted_blocks=1 00:25:46.032 00:25:46.032 ' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.032 --rc genhtml_branch_coverage=1 00:25:46.032 --rc genhtml_function_coverage=1 00:25:46.032 --rc genhtml_legend=1 00:25:46.032 --rc geninfo_all_blocks=1 00:25:46.032 --rc geninfo_unexecuted_blocks=1 00:25:46.032 00:25:46.032 ' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.032 --rc genhtml_branch_coverage=1 00:25:46.032 --rc genhtml_function_coverage=1 00:25:46.032 --rc genhtml_legend=1 00:25:46.032 --rc geninfo_all_blocks=1 00:25:46.032 --rc geninfo_unexecuted_blocks=1 00:25:46.032 00:25:46.032 ' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.032 --rc genhtml_branch_coverage=1 00:25:46.032 --rc genhtml_function_coverage=1 00:25:46.032 --rc genhtml_legend=1 00:25:46.032 --rc geninfo_all_blocks=1 00:25:46.032 --rc geninfo_unexecuted_blocks=1 00:25:46.032 00:25:46.032 ' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:46.032 11:40:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.032 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.033 11:40:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:54.173 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:54.173 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.173 11:40:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.173 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:54.174 Found net devices under 0000:31:00.0: cvl_0_0 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:54.174 Found net devices under 0000:31:00.1: cvl_0_1 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.174 
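The preceding trace enumerates the detected E810 PCI functions and resolves each one to its kernel net device by globbing the per-device sysfs net/ directory, which is how cvl_0_0 and cvl_0_1 are found. A rough standalone sketch of that sysfs walk, with the BDF list hard-coded as an assumption taken from this host's output:

  # Sketch: resolve PCI functions to kernel net device names via sysfs.
  for pci in 0000:31:00.0 0000:31:00.1; do      # example BDFs from this run
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] || continue            # skip functions with no bound net device
          echo "Found net devices under $pci: ${path##*/}"
      done
  done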
11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.174 11:40:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:25:54.174 00:25:54.174 --- 10.0.0.2 ping statistics --- 00:25:54.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.174 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:25:54.174 00:25:54.174 --- 10.0.0.1 ping statistics --- 00:25:54.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.174 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3649829 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3649829 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3649829 ']' 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.174 11:40:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.174 [2024-12-09 11:40:45.421754] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:25:54.174 [2024-12-09 11:40:45.421818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.174 [2024-12-09 11:40:45.521518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.174 [2024-12-09 11:40:45.571892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.174 [2024-12-09 11:40:45.571946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.174 [2024-12-09 11:40:45.571955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.174 [2024-12-09 11:40:45.571962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.174 [2024-12-09 11:40:45.571968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.174 [2024-12-09 11:40:45.572798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.174 [2024-12-09 11:40:46.289375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.174 [2024-12-09 11:40:46.301606] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.174 null0 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.174 null1 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.174 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:54.175 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.175 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3649859 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3649859 /tmp/host.sock 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3649859 ']' 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.435 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:54.436 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:54.436 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.436 11:40:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.436 [2024-12-09 11:40:46.409089] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
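Two nvmf_tgt instances are now running: the target (pid 3649829, core mask 0x2, RPC socket /var/tmp/spdk.sock, inside the namespace) and a host-side app (pid 3649859, core mask 0x1, RPC socket /tmp/host.sock) that plays the discovery client. The target-side provisioning traced above reduces to a handful of RPCs; a sketch using the same rpc_cmd helper, assuming calls without -s go to the target's default socket as in the autotest harness (the "-t tcp -o" flags come from NVMF_TRANSPORT_OPTS set earlier):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # TCP transport for the target
rpc_cmd nvmf_subsystem_add_listener \
    nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512             # 1000 blocks x 512 B each
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine

The host app is driven separately with rpc_cmd -s /tmp/host.sock, which is how the test keeps target-side configuration and host-side observations distinct in the checks that follow.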
00:25:54.436 [2024-12-09 11:40:46.409166] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649859 ] 00:25:54.436 [2024-12-09 11:40:46.488935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.436 [2024-12-09 11:40:46.532159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.377 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.378 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.639 [2024-12-09 11:40:47.556773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:55.639 11:40:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:55.639 11:40:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:56.210 [2024-12-09 11:40:48.269195] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:56.210 [2024-12-09 11:40:48.269216] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:56.210 [2024-12-09 11:40:48.269230] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.210 
[2024-12-09 11:40:48.357502] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:56.470 [2024-12-09 11:40:48.576737] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:56.470 [2024-12-09 11:40:48.577735] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bc4190:1 started. 00:25:56.470 [2024-12-09 11:40:48.579352] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.470 [2024-12-09 11:40:48.579369] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:56.470 [2024-12-09 11:40:48.587522] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bc4190 was disconnected and freed. delete nvme_qpair. 00:25:56.731 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.731 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.731 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:56.731 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.731 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.731 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.732 11:40:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.732 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.995 11:40:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.995 11:40:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.995 [2024-12-09 11:40:49.000994] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bc45d0:1 started. 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.995 [2024-12-09 11:40:49.007755] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bc45d0 was disconnected and freed. delete nvme_qpair. 
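The recurring get_subsystem_names/get_bdev_list/get_notification_count checks are thin wrappers over host-side RPCs: list controllers or bdevs, pull out names with jq, then sort and flatten with xargs so the result compares cleanly against strings like "nvme0n1 nvme0n2". Reconstructed from the traced commands (discovery.sh@55, @59, @74-75); the notify_id bookkeeping is inferred from the values the trace assigns:

get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

get_notification_count() {
    # count events newer than the last seen notify_id, then advance it
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

Each nvmf_subsystem_add_ns on the target shows up on the host as one new notification and one new namespace bdev, which is why the expected bdev list grows from "nvme0n1" to "nvme0n1 nvme0n2" and notify_id advances 0 -> 1 -> 2.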
00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.995 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.996 [2024-12-09 11:40:49.088762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:56.996 [2024-12-09 11:40:49.089110] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:56.996 [2024-12-09 11:40:49.089132] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.996 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.257 [2024-12-09 11:40:49.175384] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:57.257 11:40:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:57.257 11:40:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:57.257 [2024-12-09 11:40:49.275175] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:57.257 [2024-12-09 11:40:49.275213] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:57.257 [2024-12-09 11:40:49.275222] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:57.257 [2024-12-09 11:40:49.275227] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.198 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.198 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:58.198 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:58.198 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.199 [2024-12-09 11:40:50.332162] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:58.199 [2024-12-09 11:40:50.332189] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:58.199 [2024-12-09 11:40:50.341046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.199 [2024-12-09 11:40:50.341067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.199 [2024-12-09 11:40:50.341077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.199 [2024-12-09 11:40:50.341085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.199 [2024-12-09 
11:40:50.341093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.199 [2024-12-09 11:40:50.341100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.199 [2024-12-09 11:40:50.341108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.199 [2024-12-09 11:40:50.341116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.199 [2024-12-09 11:40:50.341123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.199 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:58.199 [2024-12-09 11:40:50.351058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.461 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.461 [2024-12-09 11:40:50.361095] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.461 [2024-12-09 11:40:50.361108] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.461 [2024-12-09 11:40:50.361115] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.461 [2024-12-09 11:40:50.361121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.461 [2024-12-09 11:40:50.361146] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
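Removing the 4420 listener (host/discovery.sh@127) tears down the host's first path: the ASYNC EVENT REQUESTs pending on that admin queue complete as ABORTED - SQ DELETION, and bdev_nvme enters the reconnect cycle for tqpair 0x1b947d0 seen below. Each retry's connect() fails with errno = 111 because nothing listens on 10.0.0.2:4420 anymore; on Linux that errno is ECONNREFUSED, which a one-liner confirms:

python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
# ECONNREFUSED Connection refused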
00:25:58.461 [2024-12-09 11:40:50.361382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.461 [2024-12-09 11:40:50.361397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.461 [2024-12-09 11:40:50.361406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.461 [2024-12-09 11:40:50.361418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.461 [2024-12-09 11:40:50.361436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.461 [2024-12-09 11:40:50.361444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.461 [2024-12-09 11:40:50.361453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.461 [2024-12-09 11:40:50.361460] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.461 [2024-12-09 11:40:50.361466] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.461 [2024-12-09 11:40:50.361471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.461 [2024-12-09 11:40:50.371176] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.461 [2024-12-09 11:40:50.371189] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.461 [2024-12-09 11:40:50.371194] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.461 [2024-12-09 11:40:50.371199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.461 [2024-12-09 11:40:50.371215] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.461 [2024-12-09 11:40:50.371410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.461 [2024-12-09 11:40:50.371422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.461 [2024-12-09 11:40:50.371430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.461 [2024-12-09 11:40:50.371441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.461 [2024-12-09 11:40:50.371452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.461 [2024-12-09 11:40:50.371459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.461 [2024-12-09 11:40:50.371467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.461 [2024-12-09 11:40:50.371473] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:58.461 [2024-12-09 11:40:50.371478] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.461 [2024-12-09 11:40:50.371482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.461 [2024-12-09 11:40:50.381246] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.462 [2024-12-09 11:40:50.381259] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.462 [2024-12-09 11:40:50.381264] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.381273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.462 [2024-12-09 11:40:50.381288] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.381574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.462 [2024-12-09 11:40:50.381587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.462 [2024-12-09 11:40:50.381594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.462 [2024-12-09 11:40:50.381606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.462 [2024-12-09 11:40:50.381623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.462 [2024-12-09 11:40:50.381630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.462 [2024-12-09 11:40:50.381638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.462 [2024-12-09 11:40:50.381644] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.462 [2024-12-09 11:40:50.381649] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.462 [2024-12-09 11:40:50.381653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
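Each repetition from "Delete qpairs for reset" through "Resetting controller failed" is one iteration of bdev_nvme's reconnect poller against the dead 4420 path; the 4421 path is untouched, so the controller's path list should eventually settle on the second port alone. A hedged way to watch for that from the host socket, assuming the test's expected end state is the one remaining listener:

rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# expected once the stale path is gone: 4421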
00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:58.462 [2024-12-09 11:40:50.391320] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.462 [2024-12-09 11:40:50.391332] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.462 [2024-12-09 11:40:50.391336] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.391341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.462 [2024-12-09 11:40:50.391355] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.391643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.462 [2024-12-09 11:40:50.391654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.462 [2024-12-09 11:40:50.391662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.462 [2024-12-09 11:40:50.391672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.462 [2024-12-09 11:40:50.391689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.462 [2024-12-09 11:40:50.391703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.462 [2024-12-09 11:40:50.391710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.462 [2024-12-09 11:40:50.391716] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.462 [2024-12-09 11:40:50.391721] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.462 [2024-12-09 11:40:50.391725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.462 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.462 [2024-12-09 11:40:50.401386] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.462 [2024-12-09 11:40:50.401400] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.462 [2024-12-09 11:40:50.401405] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.401410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.462 [2024-12-09 11:40:50.401425] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.401922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.462 [2024-12-09 11:40:50.401934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.462 [2024-12-09 11:40:50.401942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.462 [2024-12-09 11:40:50.401953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.462 [2024-12-09 11:40:50.401971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.462 [2024-12-09 11:40:50.401978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.462 [2024-12-09 11:40:50.401985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.462 [2024-12-09 11:40:50.401991] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.462 [2024-12-09 11:40:50.401996] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.462 [2024-12-09 11:40:50.402001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.462 [2024-12-09 11:40:50.411456] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.462 [2024-12-09 11:40:50.411468] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.462 [2024-12-09 11:40:50.411473] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
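The get_bdev_list helper being polled here is fully visible in the trace (host/discovery.sh line 55): it queries the host application over its private RPC socket for all bdevs and flattens the names into one sorted line, so the condition string can compare the result against "nvme0n1 nvme0n2" verbatim. Reconstructed from the trace; rpc_cmd is the suite's wrapper around scripts/rpc.py:

# As traced above (host/discovery.sh@55).
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}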
00:25:58.462 [2024-12-09 11:40:50.411478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.462 [2024-12-09 11:40:50.411491] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.411781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.462 [2024-12-09 11:40:50.411793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.462 [2024-12-09 11:40:50.411800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.462 [2024-12-09 11:40:50.411811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.462 [2024-12-09 11:40:50.411822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.462 [2024-12-09 11:40:50.411828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.462 [2024-12-09 11:40:50.411836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.462 [2024-12-09 11:40:50.411842] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.462 [2024-12-09 11:40:50.411846] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.462 [2024-12-09 11:40:50.411851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.462 [2024-12-09 11:40:50.421523] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.462 [2024-12-09 11:40:50.421537] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.462 [2024-12-09 11:40:50.421541] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.462 [2024-12-09 11:40:50.421546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.462 [2024-12-09 11:40:50.421561] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:58.462 [2024-12-09 11:40:50.421725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.462 [2024-12-09 11:40:50.421737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.462 [2024-12-09 11:40:50.421744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.462 [2024-12-09 11:40:50.421755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.462 [2024-12-09 11:40:50.421765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.462 [2024-12-09 11:40:50.421772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.462 [2024-12-09 11:40:50.421779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.463 [2024-12-09 11:40:50.421786] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.463 [2024-12-09 11:40:50.421791] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.463 [2024-12-09 11:40:50.421795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.463 [2024-12-09 11:40:50.431592] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.463 [2024-12-09 11:40:50.431603] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.463 [2024-12-09 11:40:50.431608] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.463 [2024-12-09 11:40:50.431612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.463 [2024-12-09 11:40:50.431630] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.463 [2024-12-09 11:40:50.431928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.463 [2024-12-09 11:40:50.431940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.463 [2024-12-09 11:40:50.431947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.463 [2024-12-09 11:40:50.431958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.463 [2024-12-09 11:40:50.431968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.463 [2024-12-09 11:40:50.431974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.463 [2024-12-09 11:40:50.431981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.463 [2024-12-09 11:40:50.431987] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:58.463 [2024-12-09 11:40:50.431992] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.463 [2024-12-09 11:40:50.431996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:58.463 [2024-12-09 11:40:50.441661] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.463 [2024-12-09 11:40:50.441673] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.463 [2024-12-09 11:40:50.441678] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.463 [2024-12-09 11:40:50.441682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.463 [2024-12-09 11:40:50.441696] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.463 [2024-12-09 11:40:50.441980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.463 [2024-12-09 11:40:50.441993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.463 [2024-12-09 11:40:50.442000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.463 [2024-12-09 11:40:50.442016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.463 [2024-12-09 11:40:50.442027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.463 [2024-12-09 11:40:50.442033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.463 [2024-12-09 11:40:50.442040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.463 [2024-12-09 11:40:50.442046] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.463 [2024-12-09 11:40:50.442054] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:58.463 [2024-12-09 11:40:50.442059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.463 [2024-12-09 11:40:50.451728] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.463 [2024-12-09 11:40:50.451741] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.463 [2024-12-09 11:40:50.451745] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.463 [2024-12-09 11:40:50.451750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.463 [2024-12-09 11:40:50.451764] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.463 [2024-12-09 11:40:50.452261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.463 [2024-12-09 11:40:50.452299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b947d0 with addr=10.0.0.2, port=4420 00:25:58.463 [2024-12-09 11:40:50.452310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b947d0 is same with the state(6) to be set 00:25:58.463 [2024-12-09 11:40:50.452329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b947d0 (9): Bad file descriptor 00:25:58.463 [2024-12-09 11:40:50.452354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.463 [2024-12-09 11:40:50.452362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.463 [2024-12-09 11:40:50.452370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.463 [2024-12-09 11:40:50.452377] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.463 [2024-12-09 11:40:50.452383] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.463 [2024-12-09 11:40:50.452388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
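Interleaved with the reconnect noise, the xtrace spells out the polling idiom the whole test leans on: waitforcondition (autotest_common.sh lines 918-924) evals an arbitrary condition string up to ten times, one second apart, and get_subsystem_paths (host/discovery.sh line 63) reduces bdev_nvme_get_controllers output to a sorted list of trsvcid values. A reconstruction from the trace; the in-tree helpers may differ in details such as the exhausted-retries path, which the trace never reaches:

# Retry count and sleep taken from the @918-@924 trace lines above; the
# return-1-on-exhaustion is an assumption (only the success path is traced).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# As traced at host/discovery.sh@63.
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# Usage mirrored from the log: block until only the second port remains.
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'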
00:25:58.463 [2024-12-09 11:40:50.459371] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:58.463 [2024-12-09 11:40:50.459392] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:58.463 11:40:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.403 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.663 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.663 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.664 11:40:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.051 [2024-12-09 11:40:52.830188] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:01.051 [2024-12-09 11:40:52.830207] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:01.051 [2024-12-09 11:40:52.830220] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.051 [2024-12-09 11:40:52.916491] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:01.051 [2024-12-09 11:40:53.021348] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:01.051 [2024-12-09 11:40:53.022119] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1bac2b0:1 started. 
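With the bdev list empty again and both removal notifications accounted for (notification_count=2, notify_id=4), the test restarts discovery and then drives the two error paths shown in the JSON dumps below: re-issuing bdev_nvme_start_discovery for 10.0.0.2:8009, which already has a discovery service (first reusing the name nvme, then as nvme_second), fails with -17 "File exists"; aiming one at the dead port 8010 with a 3-second attach timeout fails with -110 "Connection timed out". The equivalent direct invocations outside the rpc_cmd wrapper; the scripts/rpc.py path assumes an SPDK checkout:

# -w makes the RPC block until the discovery ctrlr has attached.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# Duplicate endpoint: rejected with -17 "File exists" (see response below).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# Nothing listens on 8010; -T 3000 bounds the attach attempts, so the
# RPC fails with -110 "Connection timed out" after ~3 s (see below).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000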
00:26:01.051 [2024-12-09 11:40:53.023987] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.051 [2024-12-09 11:40:53.024023] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.051 request: 00:26:01.051 { 00:26:01.051 "name": "nvme", 00:26:01.051 "trtype": "tcp", 00:26:01.051 "traddr": "10.0.0.2", 00:26:01.051 "adrfam": "ipv4", 00:26:01.051 "trsvcid": "8009", 00:26:01.051 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.051 "wait_for_attach": true, 00:26:01.051 "method": "bdev_nvme_start_discovery", 00:26:01.051 "req_id": 1 00:26:01.051 } 00:26:01.051 Got JSON-RPC error response 00:26:01.051 response: 00:26:01.051 { 00:26:01.051 "code": -17, 00:26:01.051 "message": "File exists" 00:26:01.051 } 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.051 11:40:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.051 [2024-12-09 11:40:53.069830] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1bac2b0 was disconnected and freed. delete nvme_qpair. 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.051 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.052 request: 00:26:01.052 { 00:26:01.052 "name": "nvme_second", 00:26:01.052 "trtype": "tcp", 00:26:01.052 "traddr": "10.0.0.2", 00:26:01.052 "adrfam": "ipv4", 00:26:01.052 "trsvcid": "8009", 00:26:01.052 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.052 "wait_for_attach": true, 00:26:01.052 "method": 
"bdev_nvme_start_discovery", 00:26:01.052 "req_id": 1 00:26:01.052 } 00:26:01.052 Got JSON-RPC error response 00:26:01.052 response: 00:26:01.052 { 00:26:01.052 "code": -17, 00:26:01.052 "message": "File exists" 00:26:01.052 } 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.052 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.312 11:40:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.312 11:40:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.255 [2024-12-09 11:40:54.279335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.255 [2024-12-09 11:40:54.279364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac490 with addr=10.0.0.2, port=8010 00:26:02.255 [2024-12-09 11:40:54.279379] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:02.255 [2024-12-09 11:40:54.279387] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:02.255 [2024-12-09 11:40:54.279399] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:03.194 [2024-12-09 11:40:55.281792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.194 [2024-12-09 11:40:55.281814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bac490 with addr=10.0.0.2, port=8010 00:26:03.194 [2024-12-09 11:40:55.281826] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:03.194 [2024-12-09 11:40:55.281832] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:03.194 [2024-12-09 11:40:55.281839] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:04.136 [2024-12-09 11:40:56.283820] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:04.136 request: 00:26:04.136 { 00:26:04.136 "name": "nvme_second", 00:26:04.136 "trtype": "tcp", 00:26:04.137 "traddr": "10.0.0.2", 00:26:04.137 "adrfam": "ipv4", 00:26:04.137 "trsvcid": "8010", 00:26:04.137 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:04.137 "wait_for_attach": false, 00:26:04.137 "attach_timeout_ms": 3000, 00:26:04.137 "method": "bdev_nvme_start_discovery", 00:26:04.137 "req_id": 1 00:26:04.137 } 00:26:04.137 Got JSON-RPC error response 00:26:04.137 response: 00:26:04.137 { 00:26:04.137 "code": -110, 00:26:04.137 "message": "Connection timed out" 00:26:04.137 } 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:04.137 11:40:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:04.137 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.397 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3649859 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.398 rmmod nvme_tcp 00:26:04.398 rmmod nvme_fabrics 00:26:04.398 rmmod nvme_keyring 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3649829 ']' 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3649829 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3649829 ']' 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3649829 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3649829 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3649829' 00:26:04.398 killing process with pid 3649829 00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3649829 
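Teardown is in progress here: nvmftestfini syncs and unloads the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), then killprocess stops the target by pid. A sketch of the module-unload step as traced from nvmf/common.sh lines 121-129; the retry bound comes from the "for i in {1..20}" line, while the break condition and any per-iteration delay are assumptions:

# Sketch reconstructed from the nvmf/common.sh xtrace; details assumed
# where the trace only shows the first successful iteration.
nvmfcleanup() {
    sync
    set +e
    for i in {1..20}; do
        # modprobe -r can race with in-flight disconnects, hence the loop.
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumption: the trace does not show a retry delay
    done
    set -e
}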
00:26:04.398 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3649829 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.659 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.660 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.660 11:40:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.575 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:06.575 00:26:06.575 real 0m20.966s 00:26:06.575 user 0m25.020s 00:26:06.575 sys 0m7.150s 00:26:06.575 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.575 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.575 ************************************ 00:26:06.575 END TEST nvmf_host_discovery 00:26:06.575 ************************************ 00:26:06.575 11:40:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:06.575 11:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:06.575 11:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.575 11:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.837 ************************************ 00:26:06.837 START TEST nvmf_host_multipath_status 00:26:06.837 ************************************ 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:06.837 * Looking for test storage... 
00:26:06.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:06.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.837 --rc genhtml_branch_coverage=1 00:26:06.837 --rc genhtml_function_coverage=1 00:26:06.837 --rc genhtml_legend=1 00:26:06.837 --rc geninfo_all_blocks=1 00:26:06.837 --rc geninfo_unexecuted_blocks=1 00:26:06.837 00:26:06.837 ' 00:26:06.837 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:06.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.837 --rc genhtml_branch_coverage=1 00:26:06.837 --rc genhtml_function_coverage=1 00:26:06.837 --rc genhtml_legend=1 00:26:06.837 --rc geninfo_all_blocks=1 00:26:06.838 --rc geninfo_unexecuted_blocks=1 00:26:06.838 00:26:06.838 ' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:06.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.838 --rc genhtml_branch_coverage=1 00:26:06.838 --rc genhtml_function_coverage=1 00:26:06.838 --rc genhtml_legend=1 00:26:06.838 --rc geninfo_all_blocks=1 00:26:06.838 --rc geninfo_unexecuted_blocks=1 00:26:06.838 00:26:06.838 ' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:06.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.838 --rc genhtml_branch_coverage=1 00:26:06.838 --rc genhtml_function_coverage=1 00:26:06.838 --rc genhtml_legend=1 00:26:06.838 --rc geninfo_all_blocks=1 00:26:06.838 --rc geninfo_unexecuted_blocks=1 00:26:06.838 00:26:06.838 ' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
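The next test (multipath_status) opens by checking the installed lcov against version 2 via scripts/common.sh: versions are split on '.', '-' and ':' and compared component-wise, so lt 1.15 2 holds and the branch/function coverage flags get exported. A reconstruction of the comparator from the trace, assuming purely numeric components (the in-tree decimal helper also validates each part):

# Reconstructed from the scripts/common.sh xtrace above. Succeeds iff
# "ver1 op ver2" holds; components are compared left to right, with
# missing components treated as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
        if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2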
00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:06.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:06.838 11:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.982 11:41:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:14.982 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
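
The block above is gather_supported_nvmf_pci_devs sorting the host's NICs into per-family arrays (e810, x722, mlx) keyed on PCI vendor:device IDs, then keeping the e810 entries because SPDK_TEST_NVMF_NICS=e810 in this run. The real helper reads a pci_bus_cache built from sysfs; the sketch below is an illustrative stand-in that scans lspci instead, with the device-ID table abridged to the IDs visible in the trace:

    # Bucket NICs by vendor:device. Assumption: lspci-based stand-in, not
    # the sysfs-backed pci_bus_cache the real nvmf/common.sh uses.
    intel=0x8086 mellanox=0x15b3
    declare -a e810=() x722=() mlx=()
    while read -r pci _cls vendor device _; do
        case "0x$vendor:0x$device" in
            "$intel":0x1592|"$intel":0x159b)              e810+=("$pci") ;;
            "$intel":0x37d2)                              x722+=("$pci") ;;
            "$mellanox":0x101[3579bd]|"$mellanox":0x1021) mlx+=("$pci") ;;
        esac
    done < <(lspci -Dnmm | tr -d '"')
    (( ${#e810[@]} )) && printf 'Found %s (e810)\n' "${e810[@]}"

On this node that would report the two 0x159b functions, 0000:31:00.0 and 0000:31:00.1, matching the "Found 0000:31:00.x (0x8086 - 0x159b)" lines in the trace.
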
00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.982 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:14.983 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:14.983 Found net devices under 0000:31:00.0: cvl_0_0 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:26:14.983 Found net devices under 0000:31:00.1: cvl_0_1 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.983 11:41:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:14.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:26:14.983 00:26:14.983 --- 10.0.0.2 ping statistics --- 00:26:14.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.983 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:26:14.983 00:26:14.983 --- 10.0.0.1 ping statistics --- 00:26:14.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.983 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3656434 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3656434 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3656434 ']' 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.983 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.984 11:41:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.984 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.984 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:14.984 [2024-12-09 11:41:06.671798] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:26:14.984 [2024-12-09 11:41:06.671864] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.984 [2024-12-09 11:41:06.755781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:14.984 [2024-12-09 11:41:06.797037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.984 [2024-12-09 11:41:06.797075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.984 [2024-12-09 11:41:06.797083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.984 [2024-12-09 11:41:06.797089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.984 [2024-12-09 11:41:06.797095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.984 [2024-12-09 11:41:06.798338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.984 [2024-12-09 11:41:06.798340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3656434 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:15.555 [2024-12-09 11:41:07.645928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.555 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:15.816 Malloc0 00:26:15.816 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:16.077 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:16.077 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.338 [2024-12-09 11:41:08.335506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.338 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:16.338 [2024-12-09 11:41:08.491840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3656800 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3656800 /var/tmp/bdevperf.sock 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3656800 ']' 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:16.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
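
At this point the target side is up (nvmf_tgt inside the cvl_0_0_ns_spdk namespace, Malloc0 exported as cnode1 on 10.0.0.2:4420 and :4421) and the test brings up bdevperf as the host. The -z flag starts bdevperf idle with a private RPC socket so paths can be attached before any I/O runs. A condensed sketch of that flow, using only commands that appear in this log (the polling loop is a simplified stand-in for waitforlisten):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # as in this run
    SOCK=/var/tmp/bdevperf.sock
    # Start bdevperf idle (-z) on core 2 with its own RPC socket.
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" \
        -q 128 -o 4096 -w verify -t 90 &
    # Poll until the RPC socket answers (waitforlisten, simplified).
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1   # as at @52
    # Attach one controller per portal; -x multipath makes the second
    # attach a path of Nvme0 rather than a new controller.
    for port in 4420 4421; do
        "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    done
    # Kick off the deferred workload.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 120 -s "$SOCK" perform_tests &
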
00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:16.599 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:16.860 11:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:17.120 Nvme0n1 00:26:17.120 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:17.693 Nvme0n1 00:26:17.693 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:17.693 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:19.611 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:19.611 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:19.872 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.872 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:20.815 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:20.815 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.815 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.815 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.075 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.075 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:21.075 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.075 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.335 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.335 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.335 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.335 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.595 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.854 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.854 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.854 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.854 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.114 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.114 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:22.114 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
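
The listener update just above (port 4420) and its twin just below (port 4421) are the two halves of the set_ANA_state helper at multipath_status.sh@59-60. Pulled out of the trace, the helper reduces to:

    # set_ANA_state STATE_4420 STATE_4421: set each listener's ANA state
    # (optimized | non_optimized | inaccessible); rpc.py path as in this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Each state change is followed by sleep 1 in the trace so the host can observe the ANA change before check_status inspects the paths.
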
00:26:22.114 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:22.374 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:23.316 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:23.316 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:23.316 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.316 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.577 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.577 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:23.577 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.577 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.839 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.839 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.839 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.839 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.101 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.101 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.101 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.101 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:24.101 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.101 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:24.101 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
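
The bdev_nvme_get_io_paths call above and the jq filter that follows it are the two traced halves of a single pipeline: the port_status helper (multipath_status.sh@64), which check_status runs six times per ANA transition (current, connected, and accessible for each portal). As a standalone sketch:

    # port_status TRSVCID FIELD EXPECTED: compare one attribute of the
    # io_path on the given portal against the expected value; bdevperf
    # RPC socket as in this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    port_status() {
        local got
        got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    port_status 4420 current true   # true while I/O is flowing over 4420
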
00:26:24.101 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.362 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.362 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:24.362 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.362 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.623 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.623 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:24.623 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.623 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:24.883 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:25.824 11:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:25.824 11:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.825 11:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.825 11:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:26.086 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.086 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:26.086 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.086 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:26.346 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.347 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.347 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:26:26.347 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.347 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.347 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.347 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.347 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.607 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.607 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.607 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.607 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.867 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.867 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.867 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.867 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.867 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.867 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:26.867 11:41:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:27.127 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:27.387 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:28.328 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:28.328 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:28.328 11:41:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.328 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.589 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.850 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.850 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.850 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.850 11:41:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.110 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.110 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:29.110 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.110 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.370 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.370 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:29.370 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.370 11:41:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.370 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.370 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:29.370 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:29.630 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:29.891 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:30.833 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:30.833 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:30.833 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.833 11:41:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.093 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.353 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.353 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.353 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.353 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.615 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.875 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.875 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:31.875 11:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:32.136 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:32.136 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:33.523 11:41:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.523 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.799 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.799 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.799 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.800 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.800 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.800 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:33.800 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.800 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.074 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.074 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.074 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.074 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.350 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.350 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:34.350 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:34.350 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:34.626 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:34.908 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:35.909 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:35.909 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.909 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.909 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.909 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.909 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:35.909 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.909 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:36.197 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.197 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:36.197 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.197 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:36.484 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.484 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:36.484 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.484 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:36.484 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.484 11:41:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:36.484 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.484 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.763 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.763 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:36.763 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.763 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.038 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.038 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:37.038 11:41:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:37.038 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:37.304 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:38.305 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:38.305 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:38.305 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.305 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:38.580 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.580 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:38.580 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.580 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.580 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.580 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.581 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.581 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.858 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.858 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.858 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.858 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.132 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.424 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.424 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:39.424 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:39.716 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:39.716 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
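For readers following the trace: the block above keeps replaying one pattern from host/multipath_status.sh, reconstructed below as a hedged sketch (the $rpc shorthand for the full scripts/rpc.py path is mine). set_ANA_state (@59-@60) repoints both listeners, and each check_status expectation unpacks into six port_status calls (@68-@73) that compare one field of bdevperf's io_path view against the expected value.

    #!/usr/bin/env bash
    # Shorthand for the full path traced above (illustrative variable, mine):
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # @59-@60: flip the ANA state of the 4420 and 4421 listeners on the target
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # @64: port_status <trsvcid> <field> <expected> -- ask bdevperf over its
    # RPC socket for its I/O paths and check one boolean field
    # (current / connected / accessible) for the given listener port
    port_status() {
        local got
        got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

The sleep 1 the trace issues after each set_ANA_state gives the host time to pick up the ANA change before the assertions run; the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call at @116 is what makes both paths report current true in the checks that follow it.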
00:26:40.711 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:40.711 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:40.711 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.711 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:41.001 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.001 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:41.001 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.001 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.292 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:41.583 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.584 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:41.584 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.584 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.867 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.867 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:41.867 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.867 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.867 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.867 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:41.867 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:42.128 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:42.389 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:43.331 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:43.331 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:43.331 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.331 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.591 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:43.851 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:26:43.851 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:43.851 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:43.851 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.111 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3656800
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3656800 ']'
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3656800
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3656800
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:26:44.372 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3656800'
killing process with pid 3656800
11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3656800
11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3656800
00:26:44.372 {
00:26:44.372 "results": [
00:26:44.372 {
00:26:44.372 "job": "Nvme0n1",
00:26:44.372 "core_mask": "0x4",
00:26:44.372 "workload": "verify",
00:26:44.372 "status": "terminated",
00:26:44.372 "verify_range": {
00:26:44.372 "start": 0,
00:26:44.372 "length": 16384
00:26:44.372 },
00:26:44.372 "queue_depth": 128,
00:26:44.372 "io_size": 4096,
00:26:44.372 "runtime": 26.714081,
00:26:44.372 "iops": 10680.172752339862,
00:26:44.372 "mibps": 41.71942481382759,
00:26:44.372 "io_failed": 0,
00:26:44.372 "io_timeout": 0,
00:26:44.372 "avg_latency_us": 11964.788769962133,
00:26:44.372 "min_latency_us": 262.82666666666665,
00:26:44.372 "max_latency_us": 3019898.88
00:26:44.372 }
00:26:44.372 ],
00:26:44.372 "core_count": 1
00:26:44.372 }
00:26:44.636 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3656800
00:26:44.636 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:44.636 [2024-12-09 11:41:08.564478] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:26:44.636 [2024-12-09 11:41:08.564553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656800 ]
00:26:44.636 [2024-12-09 11:41:08.625211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.636 [2024-12-09 11:41:08.653956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:44.636 Running I/O for 90 seconds...
00:26:44.636 9367.00 IOPS, 36.59 MiB/s [2024-12-09T10:41:36.798Z] 9482.00 IOPS, 37.04 MiB/s [2024-12-09T10:41:36.798Z] 9466.00 IOPS, 36.98 MiB/s [2024-12-09T10:41:36.798Z] 9460.50 IOPS, 36.96 MiB/s [2024-12-09T10:41:36.798Z] 9793.80 IOPS, 38.26 MiB/s [2024-12-09T10:41:36.798Z] 10303.17 IOPS, 40.25 MiB/s [2024-12-09T10:41:36.798Z] 10714.43 IOPS, 41.85 MiB/s [2024-12-09T10:41:36.798Z] 10670.75 IOPS, 41.68 MiB/s [2024-12-09T10:41:36.798Z] 10533.11 IOPS, 41.14 MiB/s [2024-12-09T10:41:36.798Z] 10432.20 IOPS, 40.75 MiB/s [2024-12-09T10:41:36.798Z] 10352.64 IOPS, 40.44 MiB/s [2024-12-09T10:41:36.798Z]
00:26:44.636 [2024-12-09 11:41:21.626329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.636 [2024-12-09 11:41:21.626362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:44.636 [2024-12-09 11:41:21.626396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.636 [2024-12-09 11:41:21.626402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:44.636 [2024-12-09 11:41:21.626413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.636 [2024-12-09 11:41:21.626419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:44.636 [2024-12-09 11:41:21.626429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.636 [2024-12-09 11:41:21.626435] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:44.636 [2024-12-09 11:41:21.626829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.626984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.626990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.627000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.627006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.627020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.636 [2024-12-09 11:41:21.627026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.636 [2024-12-09 11:41:21.627037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.637 
[2024-12-09 11:41:21.627368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.637 [2024-12-09 11:41:21.627503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627795] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.627827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.627832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:44.637 [2024-12-09 11:41:21.628026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.637 [2024-12-09 11:41:21.628033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:44.638 [2024-12-09 11:41:21.628047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.638 [2024-12-09 11:41:21.628053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:44.638 [2024-12-09 11:41:21.628066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.638 [2024-12-09 11:41:21.628071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:44.638 [2024-12-09 11:41:21.628085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.638 [2024-12-09 11:41:21.628090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:44.638 [2024-12-09 11:41:21.628104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.638 [2024-12-09 11:41:21.628109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:44.638 [2024-12-09 11:41:21.628122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.638 [2024-12-09 11:41:21.628127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:44.638 [2024-12-09 11:41:21.628142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.638 [2024-12-09 11:41:21.628148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:44.638 [2024-12-09 11:41:21.628161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.638 [2024-12-09 11:41:21.628167] 
00:26:44.638 [2024-12-09 11:41:21] nvme_qpair.c: (log condensed) a long run of repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs: WRITE (sqid:1 nsid:1 lba:64312-64560 len:8) and READ (sqid:1 nsid:1 lba:63552-63728 len:8) commands on qid:1, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path was down
00:26:44.639 Periodic throughput samples while the I/O retried: 10219.25, 9433.15, 8759.36, 8251.80, 8558.06, 8828.35, 9254.44, 9634.63, 9883.45, 10027.29, 10151.36, 10409.13, 10662.17 IOPS (32.23-41.65 MiB/s) [2024-12-09T10:41:36.801Z]
00:26:44.639 [2024-12-09 11:41:34] nvme_qpair.c: (log condensed) a second, shorter burst of the same command/completion pairs, WRITE lba:19928-19960 and READ lba:19160-19872, again all reported as ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
00:26:44.639 10764.80 IOPS, 42.05 MiB/s [2024-12-09T10:41:36.802Z]
00:26:44.640 10713.69 IOPS, 41.85 MiB/s [2024-12-09T10:41:36.802Z]
00:26:44.640 Received shutdown signal, test time was about 26.714692 seconds
00:26:44.640
00:26:44.640                                                            Latency(us)
00:26:44.640 [2024-12-09T10:41:36.802Z] Device Information : runtime(s)      IOPS    MiB/s   Fail/s   TO/s    Average       min        max
00:26:44.640 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:44.640 Verification LBA range: start 0x0 length 0x4000
00:26:44.640 Nvme0n1            :      26.71  10680.17    41.72     0.00   0.00   11964.79    262.83 3019898.88
00:26:44.640 [2024-12-09T10:41:36.802Z] ===================================================================================================================
00:26:44.640 [2024-12-09T10:41:36.802Z] Total              :             10680.17    41.72     0.00   0.00   11964.79    262.83 3019898.88
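The MiB/s column follows directly from the IOPS column: at the 4096-byte IO size shown in the job line, 10680.17 IOPS * 4096 B / 2^20 is about 41.72 MiB/s, matching the table. A one-liner to check it:

  awk 'BEGIN { printf "%.2f MiB/s\n", 10680.17 * 4096 / 1048576 }'   # prints 41.72 MiB/s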
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:44.640 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:44.640 rmmod nvme_tcp
00:26:44.640 rmmod nvme_fabrics
00:26:44.640 rmmod nvme_keyring
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3656434 ']'
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3656434
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3656434 ']'
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3656434
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3656434
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3656434'
00:26:44.900 killing process with pid 3656434
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3656434
00:26:44.900 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3656434
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:44.900 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:47.445
00:26:47.445 real 0m40.342s
00:26:47.445 user 1m43.441s
00:26:47.445 sys 0m11.547s
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:47.445 ************************************
00:26:47.445 END TEST nvmf_host_multipath_status
00:26:47.445 ************************************
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:47.445 ************************************
00:26:47.445 START TEST nvmf_discovery_remove_ifc
00:26:47.445 ************************************
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
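Stripped of the xtrace noise, the per-test teardown that just ran amounts to the following sequence (a sketch assembled from the trace above; the namespace-delete step inside _remove_spdk_ns is an assumption, since its output is redirected away in the trace):

  # teardown sketch, reconstructed from the nvmf_host_multipath_status trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp                               # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 3656434 && wait 3656434                          # stop the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk                       # assumed: what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1                              # release the initiator-side address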
00:26:47.445 * Looking for test storage...
00:26:47.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:47.445 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:47.446 (trace condensed) scripts/common.sh@333-@368: both version strings are split on IFS=.-: into component arrays (ver1=(1 15), ver2=(2)) and compared element by element; 1 < 2 already in the first component, so the helper returns 0 and lcov 1.15 is treated as older than 2.x
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:47.446 (trace condensed) common/autotest_common.sh@1724-@1725 export LCOV_OPTS and LCOV=lcov with the lcov_branch/lcov_function, genhtml_* and geninfo_* coverage flags
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
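The version helper traced above behaves roughly as follows (a simplified sketch of the '<' path only; the real scripts/common.sh helper supports more operators):

  lt() { cmp_versions "$1" '<' "$2"; }          # scripts/common.sh@373

  cmp_versions() {
      local ver1 ver2 ver1_l ver2_l v
      IFS=.-: read -ra ver1 <<< "$1"            # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"            # "2"    -> (2)
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side newer: '<' fails
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side older: '<' holds
      done
      return 1                                  # equal versions: strict '<' fails
  }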
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:47.446 (trace condensed) nvmf/common.sh@9-@22 set the test defaults: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:47.446 (trace condensed) scripts/common.sh enables extglob and sources /etc/opt/spdk-pkgdep/paths/export.sh, which prepends the golangci 1.54.2, protoc 21.7 and go 1.21.1 toolchain directories to PATH; the trace echoes the full PATH after each prepend, repeating the same long value several times (elided here)
00:26:47.446 (trace condensed) nvmf/common.sh@51-@55 build_nvmf_app_args appends -i "$NVMF_APP_SHM_ID" -e 0xFFFF to NVMF_APP and leaves have_pci_nics=0; the '[' '' -eq 1 ']' test at line 33 emits an integer-expression error that the script tolerates:
00:26:47.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
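The NVME_HOST array and NVME_CONNECT prefix exist so later steps can attach a kernel initiator with one expansion. Composed, they come out to a command like this (a sketch; the address, port and subsystem NQN below are illustrative, only the host identity values come from the log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # strip everything up to the uuid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # $NVME_CONNECT ... "${NVME_HOST[@]}" then expands to:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"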
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:26:47.446 11:41:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:26:47.446 (trace condensed) nvmf/common.sh@469-@442 nvmftestinit: trap nvmftestfini on SIGINT/SIGTERM/EXIT, prepare_net_devs with is_hw=no, remove any stale spdk netns, then, because NET_TYPE=phy, gather_supported_nvmf_pci_devs
00:26:55.598 (trace condensed) nvmf/common.sh@313-@344 populate the pci_devs/pci_net_devs/pci_drivers/net_devs arrays and the e810/x722/mlx device-ID tables (Intel 0x1592/0x159b/0x37d2; Mellanox 0xa2dc/0x1021/0xa2d6/0x101d/0x101b/0x1017/0x1019/0x1015/0x1013), then scan the bus:
00:26:55.598 Found 0000:31:00.0 (0x8086 - 0x159b)
00:26:55.598 Found 0000:31:00.1 (0x8086 - 0x159b)
00:26:55.598 Found net devices under 0000:31:00.0: cvl_0_0
00:26:55.598 Found net devices under 0000:31:00.1: cvl_0_1
00:26:55.598 (trace condensed) nvmf/common.sh@442-@266 both ports are ice-driven e810 NICs, so is_hw=yes and nvmf_tcp_init picks NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_TARGET_INTERFACE=cvl_0_0, NVMF_INITIATOR_INTERFACE=cvl_0_1, NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk and NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
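This is the test topology in a nutshell: one port of the dual-port NIC moves into a private namespace to act as the target host, while the other stays in the default namespace as the initiator. Reduced to plain commands (interface names taken from the log; both ports sit on the same physical link):

  TGT_IF=cvl_0_0  INI_IF=cvl_0_1  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"              # target port now only visible in $NS
  ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in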
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:55.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:55.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms
00:26:55.599
00:26:55.599 --- 10.0.0.2 ping statistics ---
00:26:55.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:55.599 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:55.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:55.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms
00:26:55.599
00:26:55.599 --- 10.0.0.1 ping statistics ---
00:26:55.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:55.599 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3666799
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3666799
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3666799 ']'
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:55.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:55.599 11:41:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.599 [2024-12-09 11:41:46.971537] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:26:55.599 [2024-12-09 11:41:46.971602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:55.599 [2024-12-09 11:41:47.072047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:55.599 [2024-12-09 11:41:47.122173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:55.599 [2024-12-09 11:41:47.122225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:55.599 [2024-12-09 11:41:47.122235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:55.599 [2024-12-09 11:41:47.122242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:55.599 [2024-12-09 11:41:47.122248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:55.599 [2024-12-09 11:41:47.123047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:55.860 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:55.860 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:26:55.860 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:55.860 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:55.860 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.860 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:55.861 [2024-12-09 11:41:47.838282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:55.861 [2024-12-09 11:41:47.846512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:26:55.861 null0
00:26:55.861 [2024-12-09 11:41:47.878480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3666835
00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3666835 /tmp/host.sock
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3666835 ']' 00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:55.861 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.861 11:41:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.861 [2024-12-09 11:41:47.955188] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:26:55.861 [2024-12-09 11:41:47.955252] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666835 ] 00:26:56.122 [2024-12-09 11:41:48.033972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.122 [2024-12-09 11:41:48.076706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:56.693 11:41:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.693 11:41:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.077 [2024-12-09 11:41:49.836608] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.077 [2024-12-09 11:41:49.836629] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.077 [2024-12-09 11:41:49.836643] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.077 [2024-12-09 11:41:49.923916] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:58.077 [2024-12-09 11:41:50.026995] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:58.077 [2024-12-09 11:41:50.027978] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x8af110:1 started. 00:26:58.077 [2024-12-09 11:41:50.029552] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:58.077 [2024-12-09 11:41:50.029598] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:58.077 [2024-12-09 11:41:50.029620] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:58.077 [2024-12-09 11:41:50.029634] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:58.077 [2024-12-09 11:41:50.029657] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.077 [2024-12-09 11:41:50.035404] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8af110 was disconnected and freed. delete nvme_qpair. 
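The get_bdev_list/wait_for_bdev xtrace that follows simply polls the host app over its RPC socket until the bdev list matches an expected name. A minimal reconstruction of those two helpers as implied by the trace above (a sketch only; rpc_cmd is assumed to be the usual SPDK test wrapper around scripts/rpc.py):

    get_bdev_list() {
        # Ask the host app for its bdevs and flatten the names onto one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

With the discovery controller attached, wait_for_bdev nvme0n1 returns as soon as the namespace bdev shows up, which is what the sleep-1 loop below is doing.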
00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.077 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.338 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.338 11:41:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.280 11:41:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.280 11:41:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.223 11:41:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.605 11:41:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.545 11:41:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.545 11:41:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.497 [2024-12-09 11:41:55.470334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:03.497 [2024-12-09 11:41:55.470382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.497 [2024-12-09 11:41:55.470395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.497 [2024-12-09 11:41:55.470405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.497 [2024-12-09 11:41:55.470412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.497 [2024-12-09 11:41:55.470420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.497 [2024-12-09 11:41:55.470428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.497 [2024-12-09 11:41:55.470436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.497 [2024-12-09 11:41:55.470443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.497 [2024-12-09 11:41:55.470451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.497 [2024-12-09 11:41:55.470459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.497 [2024-12-09 11:41:55.470466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88bb20 is same with the state(6) to be set 00:27:03.497 [2024-12-09 11:41:55.480357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88bb20 (9): Bad file descriptor 00:27:03.497 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.497 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.497 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.497 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.497 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.497 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.497 11:41:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.497 [2024-12-09 11:41:55.490393] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:03.497 [2024-12-09 11:41:55.490405] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:03.497 [2024-12-09 11:41:55.490412] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:03.497 [2024-12-09 11:41:55.490417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:03.497 [2024-12-09 11:41:55.490438] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:04.435 [2024-12-09 11:41:56.552036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:04.435 [2024-12-09 11:41:56.552075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88bb20 with addr=10.0.0.2, port=4420 00:27:04.435 [2024-12-09 11:41:56.552087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88bb20 is same with the state(6) to be set 00:27:04.435 [2024-12-09 11:41:56.552107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88bb20 (9): Bad file descriptor 00:27:04.435 [2024-12-09 11:41:56.552476] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:04.435 [2024-12-09 11:41:56.552506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:04.435 [2024-12-09 11:41:56.552514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:04.435 [2024-12-09 11:41:56.552524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:04.435 [2024-12-09 11:41:56.552532] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:04.435 [2024-12-09 11:41:56.552538] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:04.435 [2024-12-09 11:41:56.552543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:04.435 [2024-12-09 11:41:56.552551] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:04.435 [2024-12-09 11:41:56.552556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:04.435 11:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.435 11:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:04.435 11:41:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:05.821 [2024-12-09 11:41:57.554928] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:05.821 [2024-12-09 11:41:57.554950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
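The errno 110 (connection timed out) failures above are the host's reconnect loop at work: bdev_nvme_start_discovery was given --reconnect-delay-sec 1, so the lost 10.0.0.2:4420 qpair is retried about once per second, and once --ctrlr-loss-timeout-sec 2 expires without a successful reconnect the controller is moved to the failed state and torn down. One hedged way to watch that from the same RPC socket (bdev_nvme_get_controllers is a standard SPDK RPC; the jq filter is illustrative):

    # List the NVMe-oF controllers the host app still tracks; the entry
    # disappears once the ctrlr-loss timeout fails the controller for good
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'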
00:27:05.821 [2024-12-09 11:41:57.554963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:05.821 [2024-12-09 11:41:57.554971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:05.821 [2024-12-09 11:41:57.554980] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:05.821 [2024-12-09 11:41:57.554987] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:05.821 [2024-12-09 11:41:57.554993] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:05.821 [2024-12-09 11:41:57.554998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:05.821 [2024-12-09 11:41:57.555023] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:05.821 [2024-12-09 11:41:57.555045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.821 [2024-12-09 11:41:57.555055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.821 [2024-12-09 11:41:57.555065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.821 [2024-12-09 11:41:57.555074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.821 [2024-12-09 11:41:57.555082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.821 [2024-12-09 11:41:57.555090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.821 [2024-12-09 11:41:57.555098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.821 [2024-12-09 11:41:57.555106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.821 [2024-12-09 11:41:57.555115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.821 [2024-12-09 11:41:57.555126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.821 [2024-12-09 11:41:57.555134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
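At this point the controller and its discovery entry are gone, so get_bdev_list returns an empty string and wait_for_bdev '' can complete. The interface toggle driving both halves of the test is just the four ip commands traced in this run, executed inside the target's network namespace; condensed into one sketch (namespace and device names as in this job):

    NS=cvl_0_0_ns_spdk DEV=cvl_0_0
    # Remove: drop the target address, down the link, wait for nvme0n1 to vanish
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev "$DEV"
    ip netns exec "$NS" ip link set "$DEV" down
    wait_for_bdev ''
    # Re-add: restore the address, bring the link back up, wait for a fresh bdev
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$DEV"
    ip netns exec "$NS" ip link set "$DEV" up
    wait_for_bdev nvme1n1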
00:27:05.821 [2024-12-09 11:41:57.555321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87ae60 (9): Bad file descriptor 00:27:05.821 [2024-12-09 11:41:57.556334] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:05.821 [2024-12-09 11:41:57.556345] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:05.821 11:41:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.764 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.765 11:41:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.706 [2024-12-09 11:41:59.612269] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:07.706 [2024-12-09 11:41:59.612289] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:07.706 [2024-12-09 11:41:59.612303] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:07.706 [2024-12-09 11:41:59.698575] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:07.706 [2024-12-09 11:41:59.799426] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:07.706 [2024-12-09 11:41:59.800182] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x88e000:1 started. 00:27:07.706 [2024-12-09 11:41:59.801395] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:07.706 [2024-12-09 11:41:59.801426] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:07.706 [2024-12-09 11:41:59.801446] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:07.706 [2024-12-09 11:41:59.801459] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:07.706 [2024-12-09 11:41:59.801467] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.706 [2024-12-09 11:41:59.809942] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x88e000 was disconnected and freed. delete nvme_qpair. 
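Note that the reattached subsystem comes back as nvme1, not nvme0: the old controller was deleted outright, so discovery creates a brand-new controller and the data bdev is now nvme1n1. That is why the check that follows compares against nvme1n1; a quick confirmation over the RPC socket would be:

    # After reattach the bdev list should read nvme1n1 rather than nvme0n1
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'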
00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3666835 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3666835 ']' 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3666835 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.706 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666835 00:27:07.967 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.967 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.967 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666835' 00:27:07.967 killing process with pid 3666835 00:27:07.967 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3666835 00:27:07.967 11:41:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3666835 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.967 rmmod nvme_tcp 00:27:07.967 rmmod nvme_fabrics 00:27:07.967 rmmod nvme_keyring 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3666799 ']' 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3666799 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3666799 ']' 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3666799 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@959 -- # uname 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.967 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666799 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666799' 00:27:08.228 killing process with pid 3666799 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3666799 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3666799 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.228 11:42:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.772 00:27:10.772 real 0m23.190s 00:27:10.772 user 0m26.950s 00:27:10.772 sys 0m7.004s 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.772 ************************************ 00:27:10.772 END TEST nvmf_discovery_remove_ifc 00:27:10.772 ************************************ 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.772 ************************************ 00:27:10.772 
START TEST nvmf_identify_kernel_target 00:27:10.772 ************************************ 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.772 * Looking for test storage... 00:27:10.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.772 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.773 --rc genhtml_branch_coverage=1 00:27:10.773 --rc genhtml_function_coverage=1 00:27:10.773 --rc genhtml_legend=1 00:27:10.773 --rc geninfo_all_blocks=1 00:27:10.773 --rc geninfo_unexecuted_blocks=1 00:27:10.773 00:27:10.773 ' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.773 --rc genhtml_branch_coverage=1 00:27:10.773 --rc genhtml_function_coverage=1 00:27:10.773 --rc genhtml_legend=1 00:27:10.773 --rc geninfo_all_blocks=1 00:27:10.773 --rc geninfo_unexecuted_blocks=1 00:27:10.773 00:27:10.773 ' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.773 --rc genhtml_branch_coverage=1 00:27:10.773 --rc genhtml_function_coverage=1 00:27:10.773 --rc genhtml_legend=1 00:27:10.773 --rc geninfo_all_blocks=1 00:27:10.773 --rc geninfo_unexecuted_blocks=1 00:27:10.773 00:27:10.773 ' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:10.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.773 --rc genhtml_branch_coverage=1 00:27:10.773 --rc genhtml_function_coverage=1 00:27:10.773 --rc genhtml_legend=1 00:27:10.773 --rc geninfo_all_blocks=1 00:27:10.773 --rc geninfo_unexecuted_blocks=1 00:27:10.773 00:27:10.773 ' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:10.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.773 11:42:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.909 11:42:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.909 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:18.910 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:18.910 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:18.910 Found net devices under 0000:31:00.0: cvl_0_0 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:18.910 Found net devices under 0000:31:00.1: cvl_0_1 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:18.910 11:42:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:18.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:27:18.910 00:27:18.910 --- 10.0.0.2 ping statistics --- 00:27:18.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.910 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:27:18.910 00:27:18.910 --- 10.0.0.1 ping statistics --- 00:27:18.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.910 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.910 11:42:10 
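Condensed, the nvmftestinit trace above builds a loopback NVMe/TCP topology out of the two E810 ports it found: cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and takes the target address, cvl_0_1 stays in the root namespace as the initiator, TCP port 4420 is opened in iptables, and connectivity is verified with a ping in each direction. A minimal sketch of those steps, reusing the interface names and addresses reported above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Note that for this kernel-target test the listener ends up on 10.0.0.1, the root-namespace side, because the kernel nvmet target runs outside the namespace; get_main_ns_ip accordingly resolves to 10.0.0.1 just above.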
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:18.910 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:18.911 11:42:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:21.470 Waiting for block devices as requested 00:27:21.731 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:21.731 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:21.731 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:21.731 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:21.991 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:21.991 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:21.991 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:22.252 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:22.252 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:22.512 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:22.512 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:22.512 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:22.773 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:22.773 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:22.773 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:22.773 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:23.034 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
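The stretch of trace that follows picks a backing namespace and assembles the kernel target through nvmet's configfs tree. xtrace strips the redirection targets from the echo commands, so the attribute paths in this sketch are an assumption based on the standard nvmet configfs layout; the values are the ones visible in the trace:

  modprobe nvmet
  # select a plain (non-zoned), unused NVMe namespace to back the target
  for block in /sys/block/nvme*; do
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      nvme=/dev/${block##*/} && break
  done
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"        # destination assumed
  echo 1        > "$subsys/attr_allow_any_host"                       # destination assumed
  echo "$nvme"  > "$subsys/namespaces/1/device_path"
  echo 1        > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The clean_kernel_target teardown near the end of the test mirrors this in reverse: disable the namespace, remove the port-to-subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.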
00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:23.295 No valid GPT data, bailing 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:23.295 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:23.558 00:27:23.558 Discovery Log Number of Records 2, Generation counter 2 00:27:23.558 =====Discovery Log Entry 0====== 00:27:23.558 trtype: tcp 00:27:23.558 adrfam: ipv4 00:27:23.558 subtype: current discovery subsystem 00:27:23.558 treq: not specified, sq flow control disable supported 00:27:23.558 portid: 1 00:27:23.558 trsvcid: 4420 00:27:23.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:23.558 traddr: 10.0.0.1 00:27:23.558 eflags: none 00:27:23.558 sectype: none 00:27:23.558 =====Discovery Log Entry 1====== 00:27:23.558 trtype: tcp 00:27:23.558 adrfam: ipv4 00:27:23.558 subtype: nvme subsystem 00:27:23.558 treq: not specified, sq flow control disable 
supported 00:27:23.558 portid: 1 00:27:23.558 trsvcid: 4420 00:27:23.558 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:23.558 traddr: 10.0.0.1 00:27:23.558 eflags: none 00:27:23.558 sectype: none 00:27:23.558 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:23.558 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:23.558 ===================================================== 00:27:23.558 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:23.558 ===================================================== 00:27:23.558 Controller Capabilities/Features 00:27:23.558 ================================ 00:27:23.558 Vendor ID: 0000 00:27:23.558 Subsystem Vendor ID: 0000 00:27:23.558 Serial Number: 940cbeccdaa8f6c06138 00:27:23.558 Model Number: Linux 00:27:23.558 Firmware Version: 6.8.9-20 00:27:23.558 Recommended Arb Burst: 0 00:27:23.558 IEEE OUI Identifier: 00 00 00 00:27:23.558 Multi-path I/O 00:27:23.558 May have multiple subsystem ports: No 00:27:23.558 May have multiple controllers: No 00:27:23.558 Associated with SR-IOV VF: No 00:27:23.558 Max Data Transfer Size: Unlimited 00:27:23.558 Max Number of Namespaces: 0 00:27:23.558 Max Number of I/O Queues: 1024 00:27:23.558 NVMe Specification Version (VS): 1.3 00:27:23.558 NVMe Specification Version (Identify): 1.3 00:27:23.558 Maximum Queue Entries: 1024 00:27:23.558 Contiguous Queues Required: No 00:27:23.558 Arbitration Mechanisms Supported 00:27:23.558 Weighted Round Robin: Not Supported 00:27:23.558 Vendor Specific: Not Supported 00:27:23.558 Reset Timeout: 7500 ms 00:27:23.558 Doorbell Stride: 4 bytes 00:27:23.558 NVM Subsystem Reset: Not Supported 00:27:23.558 Command Sets Supported 00:27:23.558 NVM Command Set: Supported 00:27:23.558 Boot Partition: Not Supported 00:27:23.558 Memory Page Size Minimum: 4096 bytes 00:27:23.558 Memory Page Size Maximum: 4096 bytes 00:27:23.558 Persistent Memory Region: Not Supported 00:27:23.558 Optional Asynchronous Events Supported 00:27:23.558 Namespace Attribute Notices: Not Supported 00:27:23.558 Firmware Activation Notices: Not Supported 00:27:23.558 ANA Change Notices: Not Supported 00:27:23.558 PLE Aggregate Log Change Notices: Not Supported 00:27:23.558 LBA Status Info Alert Notices: Not Supported 00:27:23.558 EGE Aggregate Log Change Notices: Not Supported 00:27:23.558 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.558 Zone Descriptor Change Notices: Not Supported 00:27:23.558 Discovery Log Change Notices: Supported 00:27:23.558 Controller Attributes 00:27:23.558 128-bit Host Identifier: Not Supported 00:27:23.558 Non-Operational Permissive Mode: Not Supported 00:27:23.558 NVM Sets: Not Supported 00:27:23.558 Read Recovery Levels: Not Supported 00:27:23.558 Endurance Groups: Not Supported 00:27:23.558 Predictable Latency Mode: Not Supported 00:27:23.558 Traffic Based Keep ALive: Not Supported 00:27:23.558 Namespace Granularity: Not Supported 00:27:23.558 SQ Associations: Not Supported 00:27:23.558 UUID List: Not Supported 00:27:23.558 Multi-Domain Subsystem: Not Supported 00:27:23.558 Fixed Capacity Management: Not Supported 00:27:23.558 Variable Capacity Management: Not Supported 00:27:23.558 Delete Endurance Group: Not Supported 00:27:23.558 Delete NVM Set: Not Supported 00:27:23.558 Extended LBA Formats Supported: Not Supported 00:27:23.558 Flexible Data Placement 
Supported: Not Supported 00:27:23.558 00:27:23.558 Controller Memory Buffer Support 00:27:23.558 ================================ 00:27:23.558 Supported: No 00:27:23.558 00:27:23.558 Persistent Memory Region Support 00:27:23.558 ================================ 00:27:23.558 Supported: No 00:27:23.558 00:27:23.558 Admin Command Set Attributes 00:27:23.558 ============================ 00:27:23.558 Security Send/Receive: Not Supported 00:27:23.558 Format NVM: Not Supported 00:27:23.558 Firmware Activate/Download: Not Supported 00:27:23.558 Namespace Management: Not Supported 00:27:23.558 Device Self-Test: Not Supported 00:27:23.558 Directives: Not Supported 00:27:23.558 NVMe-MI: Not Supported 00:27:23.558 Virtualization Management: Not Supported 00:27:23.558 Doorbell Buffer Config: Not Supported 00:27:23.558 Get LBA Status Capability: Not Supported 00:27:23.558 Command & Feature Lockdown Capability: Not Supported 00:27:23.558 Abort Command Limit: 1 00:27:23.558 Async Event Request Limit: 1 00:27:23.558 Number of Firmware Slots: N/A 00:27:23.558 Firmware Slot 1 Read-Only: N/A 00:27:23.558 Firmware Activation Without Reset: N/A 00:27:23.558 Multiple Update Detection Support: N/A 00:27:23.558 Firmware Update Granularity: No Information Provided 00:27:23.558 Per-Namespace SMART Log: No 00:27:23.558 Asymmetric Namespace Access Log Page: Not Supported 00:27:23.558 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:23.558 Command Effects Log Page: Not Supported 00:27:23.558 Get Log Page Extended Data: Supported 00:27:23.558 Telemetry Log Pages: Not Supported 00:27:23.558 Persistent Event Log Pages: Not Supported 00:27:23.558 Supported Log Pages Log Page: May Support 00:27:23.558 Commands Supported & Effects Log Page: Not Supported 00:27:23.558 Feature Identifiers & Effects Log Page:May Support 00:27:23.558 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.558 Data Area 4 for Telemetry Log: Not Supported 00:27:23.558 Error Log Page Entries Supported: 1 00:27:23.558 Keep Alive: Not Supported 00:27:23.558 00:27:23.558 NVM Command Set Attributes 00:27:23.558 ========================== 00:27:23.558 Submission Queue Entry Size 00:27:23.558 Max: 1 00:27:23.558 Min: 1 00:27:23.558 Completion Queue Entry Size 00:27:23.558 Max: 1 00:27:23.558 Min: 1 00:27:23.558 Number of Namespaces: 0 00:27:23.558 Compare Command: Not Supported 00:27:23.558 Write Uncorrectable Command: Not Supported 00:27:23.558 Dataset Management Command: Not Supported 00:27:23.558 Write Zeroes Command: Not Supported 00:27:23.558 Set Features Save Field: Not Supported 00:27:23.558 Reservations: Not Supported 00:27:23.558 Timestamp: Not Supported 00:27:23.558 Copy: Not Supported 00:27:23.558 Volatile Write Cache: Not Present 00:27:23.558 Atomic Write Unit (Normal): 1 00:27:23.558 Atomic Write Unit (PFail): 1 00:27:23.558 Atomic Compare & Write Unit: 1 00:27:23.558 Fused Compare & Write: Not Supported 00:27:23.558 Scatter-Gather List 00:27:23.558 SGL Command Set: Supported 00:27:23.558 SGL Keyed: Not Supported 00:27:23.558 SGL Bit Bucket Descriptor: Not Supported 00:27:23.558 SGL Metadata Pointer: Not Supported 00:27:23.558 Oversized SGL: Not Supported 00:27:23.558 SGL Metadata Address: Not Supported 00:27:23.559 SGL Offset: Supported 00:27:23.559 Transport SGL Data Block: Not Supported 00:27:23.559 Replay Protected Memory Block: Not Supported 00:27:23.559 00:27:23.559 Firmware Slot Information 00:27:23.559 ========================= 00:27:23.559 Active slot: 0 00:27:23.559 00:27:23.559 00:27:23.559 Error Log 00:27:23.559 
========= 00:27:23.559 00:27:23.559 Active Namespaces 00:27:23.559 ================= 00:27:23.559 Discovery Log Page 00:27:23.559 ================== 00:27:23.559 Generation Counter: 2 00:27:23.559 Number of Records: 2 00:27:23.559 Record Format: 0 00:27:23.559 00:27:23.559 Discovery Log Entry 0 00:27:23.559 ---------------------- 00:27:23.559 Transport Type: 3 (TCP) 00:27:23.559 Address Family: 1 (IPv4) 00:27:23.559 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:23.559 Entry Flags: 00:27:23.559 Duplicate Returned Information: 0 00:27:23.559 Explicit Persistent Connection Support for Discovery: 0 00:27:23.559 Transport Requirements: 00:27:23.559 Secure Channel: Not Specified 00:27:23.559 Port ID: 1 (0x0001) 00:27:23.559 Controller ID: 65535 (0xffff) 00:27:23.559 Admin Max SQ Size: 32 00:27:23.559 Transport Service Identifier: 4420 00:27:23.559 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:23.559 Transport Address: 10.0.0.1 00:27:23.559 Discovery Log Entry 1 00:27:23.559 ---------------------- 00:27:23.559 Transport Type: 3 (TCP) 00:27:23.559 Address Family: 1 (IPv4) 00:27:23.559 Subsystem Type: 2 (NVM Subsystem) 00:27:23.559 Entry Flags: 00:27:23.559 Duplicate Returned Information: 0 00:27:23.559 Explicit Persistent Connection Support for Discovery: 0 00:27:23.559 Transport Requirements: 00:27:23.559 Secure Channel: Not Specified 00:27:23.559 Port ID: 1 (0x0001) 00:27:23.559 Controller ID: 65535 (0xffff) 00:27:23.559 Admin Max SQ Size: 32 00:27:23.559 Transport Service Identifier: 4420 00:27:23.559 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:23.559 Transport Address: 10.0.0.1 00:27:23.559 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.559 get_feature(0x01) failed 00:27:23.559 get_feature(0x02) failed 00:27:23.559 get_feature(0x04) failed 00:27:23.559 ===================================================== 00:27:23.559 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.559 ===================================================== 00:27:23.559 Controller Capabilities/Features 00:27:23.559 ================================ 00:27:23.559 Vendor ID: 0000 00:27:23.559 Subsystem Vendor ID: 0000 00:27:23.559 Serial Number: 4d66489587284214e9ef 00:27:23.559 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:23.559 Firmware Version: 6.8.9-20 00:27:23.559 Recommended Arb Burst: 6 00:27:23.559 IEEE OUI Identifier: 00 00 00 00:27:23.559 Multi-path I/O 00:27:23.559 May have multiple subsystem ports: Yes 00:27:23.559 May have multiple controllers: Yes 00:27:23.559 Associated with SR-IOV VF: No 00:27:23.559 Max Data Transfer Size: Unlimited 00:27:23.559 Max Number of Namespaces: 1024 00:27:23.559 Max Number of I/O Queues: 128 00:27:23.559 NVMe Specification Version (VS): 1.3 00:27:23.559 NVMe Specification Version (Identify): 1.3 00:27:23.559 Maximum Queue Entries: 1024 00:27:23.559 Contiguous Queues Required: No 00:27:23.559 Arbitration Mechanisms Supported 00:27:23.559 Weighted Round Robin: Not Supported 00:27:23.559 Vendor Specific: Not Supported 00:27:23.559 Reset Timeout: 7500 ms 00:27:23.559 Doorbell Stride: 4 bytes 00:27:23.559 NVM Subsystem Reset: Not Supported 00:27:23.559 Command Sets Supported 00:27:23.559 NVM Command Set: Supported 00:27:23.559 Boot Partition: Not Supported 00:27:23.559 
Memory Page Size Minimum: 4096 bytes 00:27:23.559 Memory Page Size Maximum: 4096 bytes 00:27:23.559 Persistent Memory Region: Not Supported 00:27:23.559 Optional Asynchronous Events Supported 00:27:23.559 Namespace Attribute Notices: Supported 00:27:23.559 Firmware Activation Notices: Not Supported 00:27:23.559 ANA Change Notices: Supported 00:27:23.559 PLE Aggregate Log Change Notices: Not Supported 00:27:23.559 LBA Status Info Alert Notices: Not Supported 00:27:23.559 EGE Aggregate Log Change Notices: Not Supported 00:27:23.559 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.559 Zone Descriptor Change Notices: Not Supported 00:27:23.559 Discovery Log Change Notices: Not Supported 00:27:23.559 Controller Attributes 00:27:23.559 128-bit Host Identifier: Supported 00:27:23.559 Non-Operational Permissive Mode: Not Supported 00:27:23.559 NVM Sets: Not Supported 00:27:23.559 Read Recovery Levels: Not Supported 00:27:23.559 Endurance Groups: Not Supported 00:27:23.559 Predictable Latency Mode: Not Supported 00:27:23.559 Traffic Based Keep ALive: Supported 00:27:23.559 Namespace Granularity: Not Supported 00:27:23.559 SQ Associations: Not Supported 00:27:23.559 UUID List: Not Supported 00:27:23.559 Multi-Domain Subsystem: Not Supported 00:27:23.559 Fixed Capacity Management: Not Supported 00:27:23.559 Variable Capacity Management: Not Supported 00:27:23.559 Delete Endurance Group: Not Supported 00:27:23.559 Delete NVM Set: Not Supported 00:27:23.559 Extended LBA Formats Supported: Not Supported 00:27:23.559 Flexible Data Placement Supported: Not Supported 00:27:23.559 00:27:23.559 Controller Memory Buffer Support 00:27:23.559 ================================ 00:27:23.559 Supported: No 00:27:23.559 00:27:23.559 Persistent Memory Region Support 00:27:23.559 ================================ 00:27:23.559 Supported: No 00:27:23.559 00:27:23.559 Admin Command Set Attributes 00:27:23.559 ============================ 00:27:23.559 Security Send/Receive: Not Supported 00:27:23.559 Format NVM: Not Supported 00:27:23.559 Firmware Activate/Download: Not Supported 00:27:23.559 Namespace Management: Not Supported 00:27:23.559 Device Self-Test: Not Supported 00:27:23.559 Directives: Not Supported 00:27:23.559 NVMe-MI: Not Supported 00:27:23.559 Virtualization Management: Not Supported 00:27:23.559 Doorbell Buffer Config: Not Supported 00:27:23.559 Get LBA Status Capability: Not Supported 00:27:23.559 Command & Feature Lockdown Capability: Not Supported 00:27:23.559 Abort Command Limit: 4 00:27:23.559 Async Event Request Limit: 4 00:27:23.559 Number of Firmware Slots: N/A 00:27:23.559 Firmware Slot 1 Read-Only: N/A 00:27:23.559 Firmware Activation Without Reset: N/A 00:27:23.559 Multiple Update Detection Support: N/A 00:27:23.559 Firmware Update Granularity: No Information Provided 00:27:23.559 Per-Namespace SMART Log: Yes 00:27:23.559 Asymmetric Namespace Access Log Page: Supported 00:27:23.559 ANA Transition Time : 10 sec 00:27:23.559 00:27:23.559 Asymmetric Namespace Access Capabilities 00:27:23.559 ANA Optimized State : Supported 00:27:23.559 ANA Non-Optimized State : Supported 00:27:23.559 ANA Inaccessible State : Supported 00:27:23.559 ANA Persistent Loss State : Supported 00:27:23.559 ANA Change State : Supported 00:27:23.559 ANAGRPID is not changed : No 00:27:23.559 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:23.559 00:27:23.559 ANA Group Identifier Maximum : 128 00:27:23.559 Number of ANA Group Identifiers : 128 00:27:23.559 Max Number of Allowed Namespaces : 1024 00:27:23.559 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:23.559 Command Effects Log Page: Supported 00:27:23.559 Get Log Page Extended Data: Supported 00:27:23.559 Telemetry Log Pages: Not Supported 00:27:23.559 Persistent Event Log Pages: Not Supported 00:27:23.559 Supported Log Pages Log Page: May Support 00:27:23.559 Commands Supported & Effects Log Page: Not Supported 00:27:23.559 Feature Identifiers & Effects Log Page:May Support 00:27:23.559 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.559 Data Area 4 for Telemetry Log: Not Supported 00:27:23.559 Error Log Page Entries Supported: 128 00:27:23.559 Keep Alive: Supported 00:27:23.559 Keep Alive Granularity: 1000 ms 00:27:23.559 00:27:23.559 NVM Command Set Attributes 00:27:23.559 ========================== 00:27:23.559 Submission Queue Entry Size 00:27:23.559 Max: 64 00:27:23.559 Min: 64 00:27:23.559 Completion Queue Entry Size 00:27:23.559 Max: 16 00:27:23.559 Min: 16 00:27:23.559 Number of Namespaces: 1024 00:27:23.559 Compare Command: Not Supported 00:27:23.559 Write Uncorrectable Command: Not Supported 00:27:23.559 Dataset Management Command: Supported 00:27:23.559 Write Zeroes Command: Supported 00:27:23.559 Set Features Save Field: Not Supported 00:27:23.559 Reservations: Not Supported 00:27:23.559 Timestamp: Not Supported 00:27:23.559 Copy: Not Supported 00:27:23.560 Volatile Write Cache: Present 00:27:23.560 Atomic Write Unit (Normal): 1 00:27:23.560 Atomic Write Unit (PFail): 1 00:27:23.560 Atomic Compare & Write Unit: 1 00:27:23.560 Fused Compare & Write: Not Supported 00:27:23.560 Scatter-Gather List 00:27:23.560 SGL Command Set: Supported 00:27:23.560 SGL Keyed: Not Supported 00:27:23.560 SGL Bit Bucket Descriptor: Not Supported 00:27:23.560 SGL Metadata Pointer: Not Supported 00:27:23.560 Oversized SGL: Not Supported 00:27:23.560 SGL Metadata Address: Not Supported 00:27:23.560 SGL Offset: Supported 00:27:23.560 Transport SGL Data Block: Not Supported 00:27:23.560 Replay Protected Memory Block: Not Supported 00:27:23.560 00:27:23.560 Firmware Slot Information 00:27:23.560 ========================= 00:27:23.560 Active slot: 0 00:27:23.560 00:27:23.560 Asymmetric Namespace Access 00:27:23.560 =========================== 00:27:23.560 Change Count : 0 00:27:23.560 Number of ANA Group Descriptors : 1 00:27:23.560 ANA Group Descriptor : 0 00:27:23.560 ANA Group ID : 1 00:27:23.560 Number of NSID Values : 1 00:27:23.560 Change Count : 0 00:27:23.560 ANA State : 1 00:27:23.560 Namespace Identifier : 1 00:27:23.560 00:27:23.560 Commands Supported and Effects 00:27:23.560 ============================== 00:27:23.560 Admin Commands 00:27:23.560 -------------- 00:27:23.560 Get Log Page (02h): Supported 00:27:23.560 Identify (06h): Supported 00:27:23.560 Abort (08h): Supported 00:27:23.560 Set Features (09h): Supported 00:27:23.560 Get Features (0Ah): Supported 00:27:23.560 Asynchronous Event Request (0Ch): Supported 00:27:23.560 Keep Alive (18h): Supported 00:27:23.560 I/O Commands 00:27:23.560 ------------ 00:27:23.560 Flush (00h): Supported 00:27:23.560 Write (01h): Supported LBA-Change 00:27:23.560 Read (02h): Supported 00:27:23.560 Write Zeroes (08h): Supported LBA-Change 00:27:23.560 Dataset Management (09h): Supported 00:27:23.560 00:27:23.560 Error Log 00:27:23.560 ========= 00:27:23.560 Entry: 0 00:27:23.560 Error Count: 0x3 00:27:23.560 Submission Queue Id: 0x0 00:27:23.560 Command Id: 0x5 00:27:23.560 Phase Bit: 0 00:27:23.560 Status Code: 0x2 00:27:23.560 Status Code Type: 0x0 00:27:23.560 Do Not Retry: 1 00:27:23.560 
Error Location: 0x28 00:27:23.560 LBA: 0x0 00:27:23.560 Namespace: 0x0 00:27:23.560 Vendor Log Page: 0x0 00:27:23.560 ----------- 00:27:23.560 Entry: 1 00:27:23.560 Error Count: 0x2 00:27:23.560 Submission Queue Id: 0x0 00:27:23.560 Command Id: 0x5 00:27:23.560 Phase Bit: 0 00:27:23.560 Status Code: 0x2 00:27:23.560 Status Code Type: 0x0 00:27:23.560 Do Not Retry: 1 00:27:23.560 Error Location: 0x28 00:27:23.560 LBA: 0x0 00:27:23.560 Namespace: 0x0 00:27:23.560 Vendor Log Page: 0x0 00:27:23.560 ----------- 00:27:23.560 Entry: 2 00:27:23.560 Error Count: 0x1 00:27:23.560 Submission Queue Id: 0x0 00:27:23.560 Command Id: 0x4 00:27:23.560 Phase Bit: 0 00:27:23.560 Status Code: 0x2 00:27:23.560 Status Code Type: 0x0 00:27:23.560 Do Not Retry: 1 00:27:23.560 Error Location: 0x28 00:27:23.560 LBA: 0x0 00:27:23.560 Namespace: 0x0 00:27:23.560 Vendor Log Page: 0x0 00:27:23.560 00:27:23.560 Number of Queues 00:27:23.560 ================ 00:27:23.560 Number of I/O Submission Queues: 128 00:27:23.560 Number of I/O Completion Queues: 128 00:27:23.560 00:27:23.560 ZNS Specific Controller Data 00:27:23.560 ============================ 00:27:23.560 Zone Append Size Limit: 0 00:27:23.560 00:27:23.560 00:27:23.560 Active Namespaces 00:27:23.560 ================= 00:27:23.560 get_feature(0x05) failed 00:27:23.560 Namespace ID:1 00:27:23.560 Command Set Identifier: NVM (00h) 00:27:23.560 Deallocate: Supported 00:27:23.560 Deallocated/Unwritten Error: Not Supported 00:27:23.560 Deallocated Read Value: Unknown 00:27:23.560 Deallocate in Write Zeroes: Not Supported 00:27:23.560 Deallocated Guard Field: 0xFFFF 00:27:23.560 Flush: Supported 00:27:23.560 Reservation: Not Supported 00:27:23.560 Namespace Sharing Capabilities: Multiple Controllers 00:27:23.560 Size (in LBAs): 3750748848 (1788GiB) 00:27:23.560 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:23.560 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:23.560 UUID: df3e2933-08bd-40d9-bfaa-bf98e8e0c0f4 00:27:23.560 Thin Provisioning: Not Supported 00:27:23.560 Per-NS Atomic Units: Yes 00:27:23.560 Atomic Write Unit (Normal): 8 00:27:23.560 Atomic Write Unit (PFail): 8 00:27:23.560 Preferred Write Granularity: 8 00:27:23.560 Atomic Compare & Write Unit: 8 00:27:23.560 Atomic Boundary Size (Normal): 0 00:27:23.560 Atomic Boundary Size (PFail): 0 00:27:23.560 Atomic Boundary Offset: 0 00:27:23.560 NGUID/EUI64 Never Reused: No 00:27:23.560 ANA group ID: 1 00:27:23.560 Namespace Write Protected: No 00:27:23.560 Number of LBA Formats: 1 00:27:23.560 Current LBA Format: LBA Format #00 00:27:23.560 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:23.560 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.560 rmmod nvme_tcp 00:27:23.560 rmmod nvme_fabrics 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.560 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.821 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.821 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.821 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.821 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.821 11:42:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:25.732 11:42:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:29.944 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:29.944 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:29.944 00:27:29.944 real 0m19.452s 00:27:29.944 user 0m5.134s 00:27:29.944 sys 0m11.366s 00:27:29.944 11:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.944 11:42:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.944 ************************************ 00:27:29.944 END TEST nvmf_identify_kernel_target 00:27:29.944 ************************************ 00:27:29.944 11:42:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.944 11:42:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:29.944 11:42:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.944 11:42:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.944 ************************************ 00:27:29.944 START TEST nvmf_auth_host 00:27:29.944 ************************************ 00:27:29.944 11:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.944 * Looking for test storage... 
00:27:29.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.944 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:29.944 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:29.944 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.206 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:30.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.207 --rc genhtml_branch_coverage=1 00:27:30.207 --rc genhtml_function_coverage=1 00:27:30.207 --rc genhtml_legend=1 00:27:30.207 --rc geninfo_all_blocks=1 00:27:30.207 --rc geninfo_unexecuted_blocks=1 00:27:30.207 00:27:30.207 ' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:30.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.207 --rc genhtml_branch_coverage=1 00:27:30.207 --rc genhtml_function_coverage=1 00:27:30.207 --rc genhtml_legend=1 00:27:30.207 --rc geninfo_all_blocks=1 00:27:30.207 --rc geninfo_unexecuted_blocks=1 00:27:30.207 00:27:30.207 ' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:30.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.207 --rc genhtml_branch_coverage=1 00:27:30.207 --rc genhtml_function_coverage=1 00:27:30.207 --rc genhtml_legend=1 00:27:30.207 --rc geninfo_all_blocks=1 00:27:30.207 --rc geninfo_unexecuted_blocks=1 00:27:30.207 00:27:30.207 ' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:30.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.207 --rc genhtml_branch_coverage=1 00:27:30.207 --rc genhtml_function_coverage=1 00:27:30.207 --rc genhtml_legend=1 00:27:30.207 --rc geninfo_all_blocks=1 00:27:30.207 --rc geninfo_unexecuted_blocks=1 00:27:30.207 00:27:30.207 ' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.207 11:42:22 
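The lt 1.15 2 check traced above is scripts/common.sh's dot-separated version comparison: lcov --version reported 1.15, which sorts below 2, so the harness exports the LCOV_OPTS/LCOV values with the --rc lcov_branch_coverage and lcov_function_coverage flags shown. A compact bash equivalent of that comparison (not the script's exact code):

  ver_lt() {                          # succeeds when $1 sorts before $2, field by field
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                        # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "old lcov: enable branch/function coverage options"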
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.207 11:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:38.356 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.357 11:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:38.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:38.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.357 
11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:38.357 Found net devices under 0000:31:00.0: cvl_0_0 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:38.357 Found net devices under 0000:31:00.1: cvl_0_1 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.357 11:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:38.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:27:38.357 00:27:38.357 --- 10.0.0.2 ping statistics --- 00:27:38.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.357 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:27:38.357 00:27:38.357 --- 10.0.0.1 ping statistics --- 00:27:38.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.357 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.357 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3682046 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3682046 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3682046 ']' 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
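Note: the nvmf_tcp_init sequence above splits the two E810 ports into separate network stacks on one machine: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 while its peer cvl_0_1 stays on the host with 10.0.0.1/24, which is why both pings succeed before anything NVMe-related starts. A minimal sketch of that wiring, with the interface and namespace names taken from this run (any names would do; run as root):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # this port joins the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # host keeps the other port
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                     # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                 # namespace -> host

Since NVMF_APP is prefixed with NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk ...), the SPDK app started next owns 10.0.0.2 inside the namespace, while the kernel nvmet target configured further down listens on 10.0.0.1 on the host side.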
00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.358 11:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3566cbd0518302ead4f2bc8710c59479 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CZJ 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3566cbd0518302ead4f2bc8710c59479 0 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3566cbd0518302ead4f2bc8710c59479 0 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3566cbd0518302ead4f2bc8710c59479 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:38.620 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CZJ 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CZJ 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CZJ 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.883 11:42:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3cf2c08d0744b76e2f108ba24f24163cba176f4442737a7b8402fb3e0eea0ecf 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jVu 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3cf2c08d0744b76e2f108ba24f24163cba176f4442737a7b8402fb3e0eea0ecf 3 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3cf2c08d0744b76e2f108ba24f24163cba176f4442737a7b8402fb3e0eea0ecf 3 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3cf2c08d0744b76e2f108ba24f24163cba176f4442737a7b8402fb3e0eea0ecf 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jVu 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jVu 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.jVu 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=00063c49e91e9ed5453151a91a344a65ea95847db03d761a 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9zw 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 00063c49e91e9ed5453151a91a344a65ea95847db03d761a 0 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 00063c49e91e9ed5453151a91a344a65ea95847db03d761a 0 
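Note: every gen_dhchap_key call traced here follows the same recipe: draw len/2 random bytes with xxd so the secret is a len-character hex string, then format_key wraps that string in what appears to be the NVMe TP 8006 secret representation — base64 of the secret bytes followed by their little-endian CRC-32, prefixed with DHHC-1:<digest>:, where the digest ids 0/1/2/3 map to null/sha256/sha384/sha512 per the digests table above. A standalone sketch of the transform (the variable layout and the python heredoc body are mine; the script's own python body is not shown by xtrace):

# digest id: 0=null, 1=sha256, 2=sha384, 3=sha512; len = secret length in hex chars
digest=0 len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 3566cbd0518302ead4f2bc8710c59479
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # integrity check, little-endian CRC-32
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF

The output matches the strings loaded later (e.g. DHHC-1:00:MzU2NmNiZDA1MTgz...: for keys[0]); each one is written to a mktemp'd /tmp/spdk.key-* file and chmod'd to 0600 before being registered with the keyring.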
00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=00063c49e91e9ed5453151a91a344a65ea95847db03d761a 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9zw 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9zw 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9zw 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=25c3173385b5f7c47a2c3a96c0699919609992f59204d7a6 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.c3F 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 25c3173385b5f7c47a2c3a96c0699919609992f59204d7a6 2 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 25c3173385b5f7c47a2c3a96c0699919609992f59204d7a6 2 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=25c3173385b5f7c47a2c3a96c0699919609992f59204d7a6 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:38.883 11:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.c3F 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.c3F 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.c3F 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.883 11:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=118aa1fc8ad5b5fac2b04a0bf421acf4 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ic7 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 118aa1fc8ad5b5fac2b04a0bf421acf4 1 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 118aa1fc8ad5b5fac2b04a0bf421acf4 1 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=118aa1fc8ad5b5fac2b04a0bf421acf4 00:27:38.883 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:38.884 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:39.145 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ic7 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ic7 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ic7 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a46edfdea8388cdc6eb9c3d65ad2b078 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.lzf 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a46edfdea8388cdc6eb9c3d65ad2b078 1 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a46edfdea8388cdc6eb9c3d65ad2b078 1 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a46edfdea8388cdc6eb9c3d65ad2b078 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.lzf 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.lzf 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.lzf 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad566363cab6b8f2f46032513c5e6cd0afe5d09305f80c13 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6rU 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad566363cab6b8f2f46032513c5e6cd0afe5d09305f80c13 2 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad566363cab6b8f2f46032513c5e6cd0afe5d09305f80c13 2 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ad566363cab6b8f2f46032513c5e6cd0afe5d09305f80c13 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6rU 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6rU 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6rU 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:39.146 11:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5a6740f2265f779fbd2e4193f33a6cbb 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yBD 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5a6740f2265f779fbd2e4193f33a6cbb 0 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5a6740f2265f779fbd2e4193f33a6cbb 0 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5a6740f2265f779fbd2e4193f33a6cbb 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yBD 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yBD 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yBD 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0cf9d2f7976248d009ef1f8e2fe7153876c7b4cb7ca593015820feed5a60d2ef 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7I3 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0cf9d2f7976248d009ef1f8e2fe7153876c7b4cb7ca593015820feed5a60d2ef 3 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0cf9d2f7976248d009ef1f8e2fe7153876c7b4cb7ca593015820feed5a60d2ef 3 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0cf9d2f7976248d009ef1f8e2fe7153876c7b4cb7ca593015820feed5a60d2ef 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:39.146 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7I3 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7I3 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7I3 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3682046 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3682046 ']' 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CZJ 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.jVu ]] 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jVu 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9zw 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.c3F ]] 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.c3F 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ic7 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.408 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.lzf ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lzf 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6rU 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yBD ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yBD 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7I3 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.670 11:42:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:39.670 11:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.974 Waiting for block devices as requested 00:27:42.974 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:42.974 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.237 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.237 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.237 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.497 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.497 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.497 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:43.759 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:43.759 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:44.020 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:44.020 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:44.020 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:44.020 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:44.280 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:44.280 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:44.280 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:45.225 No valid GPT data, bailing 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:45.225 11:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:45.225 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:45.486 00:27:45.486 Discovery Log Number of Records 2, Generation counter 2 00:27:45.486 =====Discovery Log Entry 0====== 00:27:45.486 trtype: tcp 00:27:45.486 adrfam: ipv4 00:27:45.486 subtype: current discovery subsystem 00:27:45.486 treq: not specified, sq flow control disable supported 00:27:45.486 portid: 1 00:27:45.486 trsvcid: 4420 00:27:45.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:45.486 traddr: 10.0.0.1 00:27:45.486 eflags: none 00:27:45.486 sectype: none 00:27:45.486 =====Discovery Log Entry 1====== 00:27:45.486 trtype: tcp 00:27:45.486 adrfam: ipv4 00:27:45.486 subtype: nvme subsystem 00:27:45.486 treq: not specified, sq flow control disable supported 00:27:45.486 portid: 1 00:27:45.486 trsvcid: 4420 00:27:45.486 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:45.486 traddr: 10.0.0.1 00:27:45.486 eflags: none 00:27:45.486 sectype: none 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.486 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.487 nvme0n1 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.487 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
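From here on the trace repeats one fixed authentication round per (digest, dhgroup, keyid) combination: program the target-side key, restrict the host's DH-HMAC-CHAP options to the combination under test, attach, confirm the controller came up, and detach. A minimal sketch of that loop, reconstructed from the rpc_cmd calls visible in this trace (nvmet_auth_set_key, rpc_cmd, and the keys/ckeys arrays are the test suite's own helpers; this is an illustrative reconstruction, not the verbatim host/auth.sh source):

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # target side: install the DHHC-1 key (and controller key, if any) for this round
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # host side: allow only the digest/dhgroup under test
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # connect with the matching host key; pass the controller key only when one is defined
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
              -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
              --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
          # a successful handshake surfaces the controller as nvme0
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done

The stray nvme0n1 tokens interleaved in the trace below appear to be the attach call's RPC output (the bdev created for each freshly authenticated controller) landing between xtrace lines.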
00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.749 nvme0n1 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.749 11:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.749 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.750 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.011 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.011 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.011 11:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.011 nvme0n1 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.011 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.012 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.012 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.012 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.273 nvme0n1 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.273 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.534 nvme0n1 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.534 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.805 nvme0n1 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.805 11:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.805 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.806 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.806 11:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.067 nvme0n1 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.067 
11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.067 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.328 nvme0n1 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.328 11:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.328 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.590 nvme0n1 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.590 11:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.590 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.852 nvme0n1 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.852 11:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.852 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.114 nvme0n1 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.114 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.375 nvme0n1 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.375 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:48.636 11:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.636 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.637 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.898 nvme0n1 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
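The entries above trace one pass of the test's nested loops (host/auth.sh@101-104): for every DH group and every key index, the target-side key is installed with nvmet_auth_set_key and the host then runs connect_authenticate against it. In the passes shown here the digest is fixed at sha256. A minimal bash sketch of that control flow, assuming the keys/ckeys/dhgroups arrays that host/auth.sh populates earlier in the run:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Install the key (and controller key, if one exists) on the kernel nvmet target.
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            # Attach, verify, and detach an authenticated controller from the SPDK host side.
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done

Each connect_authenticate call (auth.sh@55-65) first restricts the host to a single digest/dhgroup pair via rpc_cmd bdev_nvme_set_options, then attaches with --dhchap-key key$keyid, adding --dhchap-ctrlr-key ckey$keyid only when a controller key is defined for that index, exactly as the ckey=() expansion at auth.sh@58 shows.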
00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.898 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.159 nvme0n1 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.159 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.160 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.421 nvme0n1 00:27:49.421 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.421 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.421 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.421 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.421 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.682 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.682 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.682 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.683 11:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.683 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.944 nvme0n1 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.944 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.945 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.518 nvme0n1 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 
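The auth.sh@48-51 echo entries are nvmet_auth_set_key writing the negotiated parameters into the kernel target's per-host configuration: the digest as 'hmac(sha256)', the FFDHE group name, the DHHC-1 host secret, and, when a ckey exists for the key index, the bidirectional controller secret. The log does not show where those echos land; a sketch assuming the standard Linux nvmet configfs attribute names, which is an assumption on this editor's part rather than anything confirmed by the trace:

    # Hypothetical destination paths; the hostnqn matches the -q argument used at attach time.
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$hostdir/dhchap_hash"      # digest, auth.sh@48
    echo ffdhe6144      > "$hostdir/dhchap_dhgroup"   # DH group, auth.sh@49
    echo "$key"         > "$hostdir/dhchap_key"       # DHHC-1 host secret, auth.sh@50
    echo "$ckey"        > "$hostdir/dhchap_ctrl_key"  # written only when auth.sh@51 sees a ckey

The keyid=4 iterations, where ckey is empty ([[ -z '' ]]), skip the controller-secret write, which is why those attaches carry only --dhchap-key key4 and no --dhchap-ctrlr-key.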
00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.518 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.090 nvme0n1 00:27:51.091 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.091 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.091 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.091 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.091 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.091 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.091 11:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.091 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.663 nvme0n1 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.663 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.664 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.664 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.664 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.664 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.664 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.664 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.664 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.925 nvme0n1 00:27:51.925 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.925 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.925 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.925 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.925 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.187 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.759 nvme0n1 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.759 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.760 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.760 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.760 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.760 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.760 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.331 nvme0n1 00:27:53.331 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.331 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.331 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.331 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.331 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.331 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.592 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.593 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.593 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.593 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.593 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.593 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.593 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.165 nvme0n1 00:27:54.165 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.165 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.165 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.165 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.165 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.165 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:54.427 
11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
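
An aside on the target side: the nvmet_auth_set_key steps traced here (echo 'hmac(sha256)', echo ffdhe8192, echo DHHC-1:...) are consistent with writes to the Linux kernel nvmet configfs attributes for the connecting host. A minimal sketch of such a helper, assuming the standard nvmet configfs layout and this run's hostnqn; the helper's actual body is not part of this excerpt, and the real one receives digest/dhgroup/keyid and looks the secrets up itself:

    # Sketch only: provision per-host DH-HMAC-CHAP parameters via nvmet configfs.
    # Path and attribute names follow the Linux nvmet layout; the hostnqn is the
    # one this test connects with. Assumed, not lifted from this log.
    nvmet_auth_set_key_sketch() {
        local digest=$1 dhgroup=$2 key=$3 ckey=$4
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "$digest"  > "$host_dir/dhchap_hash"     # e.g. hmac(sha256), as echoed above
        echo "$dhgroup" > "$host_dir/dhchap_dhgroup"  # e.g. ffdhe8192
        echo "$key"     > "$host_dir/dhchap_key"      # host secret (DHHC-1:...)
        [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # bidirectional passes only
    }
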
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.427 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.998 nvme0n1 00:27:54.998 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.998 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.998 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.998 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.998 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.998 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.260 
11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.260 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.832 nvme0n1 00:27:55.832 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.832 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.832 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.832 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.832 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.832 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:56.092 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
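
The DHHC-1:NN:...: strings cycled through here are NVMe configured secrets: NN identifies the transformation applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the remainder is the base64-encoded secret plus CRC. Note that keyid 4 has an empty ckey, so that pass authenticates the host direction only. Keys in this format can be produced with nvme-cli; a hedged one-liner, with flag spellings as in recent nvme-cli releases:

    # Produce a 32-byte secret transformed with SHA-256 -> prints "DHHC-1:01:...:".
    # --hmac: 0 = no transform, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512;
    # --nqn seeds the transform. Example invocation, not taken from this log.
    nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0
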
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.093 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.664 nvme0n1 00:27:56.664 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.664 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.664 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.664 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.664 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.664 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.925 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.926 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.926 nvme0n1 00:27:56.926 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.926 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.926 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.926 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.926 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.926 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.187 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.188 nvme0n1 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.188 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:57.188 11:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:57.449 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 nvme0n1 00:27:57.450 11:42:49 
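
Every attach above is validated with the same three-step pattern before the next keyid is tried: list controllers over RPC, assert the name is nvme0, detach. Condensed from the rpc_cmd calls in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py):

    # Verification/teardown pattern repeated after each authenticated attach.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                    # non-zero exit fails the run on auth failure
    rpc_cmd bdev_nvme_detach_controller nvme0
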
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.450 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.712 nvme0n1 00:27:57.712 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.712 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.712 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.712 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.712 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.712 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.713 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.973 nvme0n1 00:27:57.973 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.973 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.973 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.973 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.973 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.973 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.973 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.235 nvme0n1 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.235 
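
For reference, the host-side half of the iteration that just ran (sha384 digest, ffdhe3072 group, keyid 0) amounts to these two RPCs, shown as direct scripts/rpc.py calls; key0/ckey0 are keyring entries registered earlier in the full script, outside this excerpt:

    # Host-side RPC pair per iteration: restrict the negotiable digest/dhgroup,
    # then attach with the matching keyring entries.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
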
11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.235 11:42:50 
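
The get_main_ns_ip expansion that follows (and recurs before every attach) only resolves which IP to dial: rdma runs use the first target IP, tcp runs the initiator-side IP, here 10.0.0.1. A reduced sketch of that selection; the real helper in nvmf/common.sh stores variable names and dereferences them, while this version stores the values directly:

    # Reduced form of the transport -> IP selection traced here; in this run
    # TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1.
    get_main_ns_ip_sketch() {
        local -A ip_candidates=(
            [rdma]="$NVMF_FIRST_TARGET_IP"
            [tcp]="$NVMF_INITIATOR_IP"
        )
        echo "${ip_candidates[$TEST_TRANSPORT]}"
    }
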
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.235 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.497 nvme0n1 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.497 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.498 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.759 nvme0n1 00:27:58.759 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.759 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.759 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.759 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.759 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.760 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.022 nvme0n1 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.022 
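
Stripped of the xtrace noise, the host side of each iteration (connect_authenticate, auth.sh@104) is just two SPDK JSON-RPC calls: pin the bdev_nvme layer to a single digest/dhgroup combination, then attach a controller with the matching keyring entries. For the keyid-3 pass traced above, the equivalent direct rpc.py invocation (rpc_cmd wraps the same RPC client) is:

  # Allow exactly one digest and one DH group for the handshake.
  rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # Connect to the target; key3/ckey3 name keyring entries registered
  # earlier in the run (their creation is outside this excerpt).
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

Restricting the option set on every pass is what turns a single connect into a test of that exact digest/dhgroup pair: the host cannot silently negotiate anything else.
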
11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.022 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.283 nvme0n1 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.283 
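
The bare nvme0n1 lines interleaved with the trace are the namespace surfacing once an authenticated attach succeeds. Each pass then verifies and tears down before re-keying (auth.sh@64-65): the controller list is read back over RPC, name-checked, and detached. In plain shell:

  # auth.sh@64: the only controller present must be the one we attached.
  name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  # auth.sh@65: detach so the next keyid starts from a clean state.
  rpc.py bdev_nvme_detach_controller nvme0

The [[ nvme0 == \n\v\m\e\0 ]] form in the trace is the same comparison; bash xtrace escapes the right-hand pattern character by character.
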
11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.283 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.855 nvme0n1 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.855 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.856 11:42:51 
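
The secrets themselves use the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where the middle field records the secret transformation (00 for a cleartext secret; 01, 02, 03 for SHA-256/384/512-transformed ones, per the spec's definition of the field). Note the keyid-1 pair above: the host key is untransformed (00) while its controller key is SHA-384-transformed (02). Decoding the base64 field shows the payload size; the reading below assumes the usual encoding of the raw secret followed by a 4-byte CRC-32 tail:

  # Base64 field copied verbatim from the keyid-1 host secret above.
  s='MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ=='
  echo -n "$s" | base64 -d | wc -c   # prints 52: a 48-byte secret plus the assumed CRC tail
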
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.856 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.117 nvme0n1 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.117 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.118 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.379 nvme0n1 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.379 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.380 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.641 nvme0n1 00:28:00.641 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.902 11:42:52 
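
Before every attach, get_main_ns_ip (nvmf/common.sh@769-783) resolves which address the host should dial: NVMF_INITIATOR_IP for tcp, NVMF_FIRST_TARGET_IP for rdma, yielding 10.0.0.1 throughout this run. A reconstruction from the visible trace follows; the transport variable name is a guess, and the guard clauses are inferred from the [[ -z ... ]] tests at @775-778:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                   # "tcp" in this run (name assumed)
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                            # indirect expansion -> 10.0.0.1
      echo "${!ip}"
  }
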
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.902 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.903 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.163 nvme0n1 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.163 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.733 nvme0n1 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.734 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.305 nvme0n1 00:28:02.305 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.306 11:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.306 11:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.306 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.892 nvme0n1 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.892 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.892 
11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.472 nvme0n1 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
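
The target half of each round, nvmet_auth_set_key, reconstructed from the host/auth.sh@42-51 trace. set -x does not print redirections, so the destinations of the echoes are an assumption here (the kernel nvmet configfs host attributes); the keys/ckeys arrays and the empty-ckey guard are verbatim. Note the keyid=4 passes, where ckeys[4] is empty: the controller-key write is skipped and the matching host attach drops --dhchap-ctrlr-key, so those rounds exercise unidirectional authentication.

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        # configfs paths below are assumed; the trace does not show them
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
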
common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.472 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.043 nvme0n1 00:28:04.043 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.043 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.043 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.043 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.043 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.043 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.044 11:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.044 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.615 nvme0n1 00:28:04.615 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.615 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.615 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.615 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.615 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.615 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
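
get_main_ns_ip, which the trace runs before every attach, resolves the initiator address by transport. A reconstruction from the nvmf/common.sh@769-783 trace; the TEST_TRANSPORT variable name is an assumption (the trace only shows its expansion, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of an env var
        [[ -z ${!ip} ]] && return 1            # indirect expansion: 10.0.0.1 in this run
        echo "${!ip}"
    }
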
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.878 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.452 nvme0n1 00:28:05.452 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.452 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.452 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.452 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.452 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.452 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.712 
11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.712 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.284 nvme0n1 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.284 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.546 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 nvme0n1 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 11:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.119 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.380 11:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.380 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.952 nvme0n1 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
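
For orientation: the driver behind this whole section is the three-level loop traced at host/auth.sh@100-102, with the set-key/connect pair at @103-104 as its body. The loop headers and calls are verbatim from the trace; the array contents are inferred from the combinations the log actually exercises.

    for digest in "${digests[@]}"; do            # sha384 has just finished; sha512 begins here
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do       # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
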
ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.952 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.953 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.213 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.213 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.213 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.213 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.213 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.213 nvme0n1 00:28:08.213 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.214 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.476 nvme0n1 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:08.476 
11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.476 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.737 nvme0n1 00:28:08.737 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.737 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.737 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.737 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.737 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.737 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.738 
11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.738 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.999 nvme0n1 00:28:08.999 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.999 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.999 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.999 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.999 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.999 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.999 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.261 nvme0n1 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.261 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.523 nvme0n1 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.523 
11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.523 11:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.523 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.784 nvme0n1 00:28:09.784 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.784 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.784 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.784 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.784 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.784 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.784 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:09.785 11:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.785 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.046 nvme0n1 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:10.046 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.047 11:43:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.047 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.308 nvme0n1 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.308 
11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.308 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
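
The trace has just attached the controller for the last sha512/ffdhe3072 key; below, the identical sequence repeats for ffdhe4096 and then ffdhe6144. Read together, the xtrace entries for host/auth.sh@101-103 and @42-51 correspond to the driver loop sketched here. This is a reconstruction from the trace alone: set -x does not print redirection targets, so the nvmet configfs paths and the $nvmet_host variable are assumptions modeled on the kernel target's usual per-host attributes, not something this log confirms.

    # Minimal sketch of the loop driving this section of the trace.
    # ASSUMPTION: the target-side host entry lives under nvmet configfs;
    # the exact path is not visible in the xtrace above.
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    nvmet_auth_set_key() {    # host/auth.sh@42-51
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"
        echo "$key" > "$nvmet_host/dhchap_key"
        # A controller (bidirectional) key is optional; keyid 4 has none in this trace.
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }

    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 in this trace
        for keyid in "${!keys[@]}"; do     # 0..4; keys/ckeys hold the DHHC-1 strings echoed above
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            connect_authenticate sha512 "$dhgroup" "$keyid"   # expanded in the sketch further below
        done
    done

The DHHC-1 strings themselves follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:<xx>:<base64 blob>:, where <xx> is conventionally the secret-transform hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512); the trace only ever shows them as opaque strings, so treat that decoding as a gloss rather than something this log verifies.
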
00:28:10.570 nvme0n1 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.570 11:43:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.570 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.831 nvme0n1 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.831 11:43:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.831 11:43:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.831 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.404 nvme0n1 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma:
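
The entry that follows expands connect_authenticate for ffdhe4096 with keyid 2; every expansion in this section has the same shape. Collecting host/auth.sh@55-65 from the trace (with get_main_ns_ip, nvmf/common.sh@769-783, picking the tcp candidate NVMF_INITIATOR_IP and resolving to 10.0.0.1 here) gives the sketch below. key${keyid} and ckey${keyid} are key names the test registered with the SPDK application earlier in the run, outside this excerpt, not the raw DHHC-1 secrets; rpc_cmd is the suite's wrapper around scripts/rpc.py, and the real helper may differ in details.

    # Sketch of connect_authenticate as reconstructed from this xtrace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # host/auth.sh@58: expands to --dhchap-ctrlr-key ckeyN only when keyid N has a controller key.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # DH-HMAC-CHAP succeeded only if the controller (and its nvme0n1 namespace) came up:
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Incidentally, the [[ nvme0 == \n\v\m\e\0 ]] entries in the trace are not corruption: xtrace backslash-escapes the quoted right-hand side of == inside [[ ]] to mark it as a literal match.
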
00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.404 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.666 nvme0n1 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd
bdev_nvme_detach_controller nvme0 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.666 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.667 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.928 nvme0n1 00:28:11.928 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.928 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.928 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.928 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.928 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.928 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.928 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.189 nvme0n1 00:28:12.189 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.449 11:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.449 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.019 nvme0n1 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.019 11:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.019 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.591 nvme0n1 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.591 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.852 nvme0n1 00:28:13.852 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.852 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.852 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.852 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.852 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.852 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:14.112 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.113 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.374 nvme0n1 00:28:14.374 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.374 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.374 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.374 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.374 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.374 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.636 11:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.636 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.637 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.903 nvme0n1 00:28:14.903 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU2NmNiZDA1MTgzMDJlYWQ0ZjJiYzg3MTBjNTk0NzmhNaCA: 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: ]] 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2NmMmMwOGQwNzQ0Yjc2ZTJmMTA4YmEyNGYyNDE2M2NiYTE3NmY0NDQyNzM3YTdiODQwMmZiM2UwZWVhMGVjZnvAAec=: 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.164 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.165 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.107 nvme0n1 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.107 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.108 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.679 nvme0n1 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.679 11:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.679 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.680 11:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.680 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.623 nvme0n1 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWQ1NjYzNjNjYWI2YjhmMmY0NjAzMjUxM2M1ZTZjZDBhZmU1ZDA5MzA1ZjgwYzEz496/vQ==: 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWE2NzQwZjIyNjVmNzc5ZmJkMmU0MTkzZjMzYTZjYmLcpgIl: 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.623 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.624 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.624 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.624 
11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 nvme0n1 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGNmOWQyZjc5NzYyNDhkMDA5ZWYxZjhlMmZlNzE1Mzg3NmM3YjRjYjdjYTU5MzAxNTgyMGZlZWQ1YTYwZDJlZu9q480=: 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.568 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.142 nvme0n1 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.142 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.404 request: 00:28:19.404 { 00:28:19.404 "name": "nvme0", 00:28:19.404 "trtype": "tcp", 00:28:19.404 "traddr": "10.0.0.1", 00:28:19.404 "adrfam": "ipv4", 00:28:19.404 "trsvcid": "4420", 00:28:19.404 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:19.404 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:19.404 "prchk_reftag": false, 00:28:19.404 "prchk_guard": false, 00:28:19.404 "hdgst": false, 00:28:19.404 "ddgst": false, 00:28:19.404 "allow_unrecognized_csi": false, 00:28:19.404 "method": "bdev_nvme_attach_controller", 00:28:19.404 "req_id": 1 00:28:19.404 } 00:28:19.404 Got JSON-RPC error response 00:28:19.404 response: 00:28:19.404 { 00:28:19.404 "code": -5, 00:28:19.404 "message": "Input/output error" 00:28:19.404 } 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.404 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
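The -5 (Input/output error) above is the expected outcome: the kernel target was provisioned with DH-HMAC-CHAP secrets, so an attach attempt that presents no key must be rejected. For orientation, the passing variant of this flow, as exercised elsewhere in the suite, condenses to roughly the following (a sketch only; rpc.py stands in for the rpc_cmd wrapper used in the trace, and the key names assume keys already registered for this host):

  # host side: restrict negotiation to the digest/dhgroup under test
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach with the matching host key and bidirectional controller key
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1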
00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.405 request: 00:28:19.405 { 00:28:19.405 "name": "nvme0", 00:28:19.405 "trtype": "tcp", 00:28:19.405 "traddr": "10.0.0.1", 00:28:19.405 "adrfam": "ipv4", 00:28:19.405 "trsvcid": "4420", 00:28:19.405 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:19.405 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:19.405 "prchk_reftag": false, 00:28:19.405 "prchk_guard": false, 00:28:19.405 "hdgst": false, 00:28:19.405 "ddgst": false, 00:28:19.405 "dhchap_key": "key2", 00:28:19.405 "allow_unrecognized_csi": false, 00:28:19.405 "method": "bdev_nvme_attach_controller", 00:28:19.405 "req_id": 1 00:28:19.405 } 00:28:19.405 Got JSON-RPC error response 00:28:19.405 response: 00:28:19.405 { 00:28:19.405 "code": -5, 00:28:19.405 "message": "Input/output error" 00:28:19.405 } 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
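Worth noting: the DHHC-1:xx:...: strings in the trace follow the NVMe-oF in-band authentication secret representation, i.e. a "DHHC-1:" prefix, a two-digit transform id (00 = no transform; 01/02/03 = SHA-256/384/512), a base64 payload, and a closing colon. The nvmet_auth_set_key helper seen at host/auth.sh@42-51 pushes these values into the kernel target via configfs, approximately as below (attribute names reflect the Linux nvmet configfs layout in recent kernels and may vary by version; hostnqn and secrets are placeholders):

  # sketch of the target-side provisioning done by nvmet_auth_set_key
  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$HOST/dhchap_hash"            # digest for the CHAP transaction
  echo ffdhe2048 > "$HOST/dhchap_dhgroup"              # DH group for augmented authentication
  echo 'DHHC-1:01:<base64>:' > "$HOST/dhchap_key"      # host secret (placeholder)
  echo 'DHHC-1:01:<base64>:' > "$HOST/dhchap_ctrl_key" # controller secret for bidirectional auth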
00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.405 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.666 request: 00:28:19.666 { 00:28:19.666 "name": "nvme0", 00:28:19.666 "trtype": "tcp", 00:28:19.666 "traddr": "10.0.0.1", 00:28:19.666 "adrfam": "ipv4", 00:28:19.666 "trsvcid": "4420", 00:28:19.666 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:19.666 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:19.666 "prchk_reftag": false, 00:28:19.666 "prchk_guard": false, 00:28:19.666 "hdgst": false, 00:28:19.666 "ddgst": false, 00:28:19.666 "dhchap_key": "key1", 00:28:19.666 "dhchap_ctrlr_key": "ckey2", 00:28:19.666 "allow_unrecognized_csi": false, 00:28:19.666 "method": "bdev_nvme_attach_controller", 00:28:19.666 "req_id": 1 00:28:19.666 } 00:28:19.666 Got JSON-RPC error response 00:28:19.666 response: 00:28:19.666 { 00:28:19.666 "code": -5, 00:28:19.666 "message": "Input/output 
error" 00:28:19.666 } 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.666 nvme0n1 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.666 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.667 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.667 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.667 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:19.667 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.667 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.667 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.927 request: 00:28:19.927 { 00:28:19.927 "name": "nvme0", 00:28:19.927 "dhchap_key": "key1", 00:28:19.927 "dhchap_ctrlr_key": "ckey2", 00:28:19.927 "method": "bdev_nvme_set_keys", 00:28:19.927 "req_id": 1 00:28:19.927 } 00:28:19.927 Got JSON-RPC error response 00:28:19.927 response: 00:28:19.927 { 00:28:19.927 "code": -13, 00:28:19.927 "message": "Permission denied" 00:28:19.927 } 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.927 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:19.928 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:20.869 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.869 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:20.869 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.869 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.869 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.869 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:20.869 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAwNjNjNDllOTFlOWVkNTQ1MzE1MWE5MWEzNDRhNjVlYTk1ODQ3ZGIwM2Q3NjFheiYsJQ==: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjVjMzE3MzM4NWI1ZjdjNDdhMmMzYTk2YzA2OTk5MTk2MDk5OTJmNTkyMDRkN2E2OW7AFw==: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.255 nvme0n1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE4YWExZmM4YWQ1YjVmYWMyYjA0YTBiZjQyMWFjZjRp8bJr: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ2ZWRmZGVhODM4OGNkYzZlYjljM2Q2NWFkMmIwNzjvJ0Ma: 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.255 request: 00:28:22.255 { 00:28:22.255 "name": "nvme0", 00:28:22.255 "dhchap_key": "key2", 00:28:22.255 "dhchap_ctrlr_key": "ckey1", 00:28:22.255 "method": "bdev_nvme_set_keys", 00:28:22.255 "req_id": 1 00:28:22.255 } 00:28:22.255 Got JSON-RPC error response 00:28:22.255 response: 00:28:22.255 { 00:28:22.255 "code": -13, 00:28:22.255 "message": "Permission denied" 00:28:22.255 } 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:22.255 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:23.637 11:43:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.637 rmmod nvme_tcp 00:28:23.637 rmmod nvme_fabrics 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3682046 ']' 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3682046 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3682046 ']' 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3682046 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682046 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682046' 00:28:23.637 killing process with pid 3682046 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3682046 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3682046 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:23.637 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:26.179 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:26.180 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:26.180 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:26.180 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:29.482 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:29.482 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:29.743 11:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CZJ /tmp/spdk.key-null.9zw /tmp/spdk.key-sha256.ic7 /tmp/spdk.key-sha384.6rU /tmp/spdk.key-sha512.7I3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:29.743 11:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:33.045 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:33.045 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:33.045 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:33.045 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:33.305 00:28:33.305 real 1m3.490s 00:28:33.305 user 0m57.165s 00:28:33.305 sys 0m15.982s 00:28:33.305 11:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.305 11:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.305 ************************************ 00:28:33.305 END TEST nvmf_auth_host 00:28:33.305 ************************************ 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.566 ************************************ 00:28:33.566 START TEST nvmf_digest 00:28:33.566 ************************************ 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:33.566 * Looking for test storage... 
00:28:33.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.566 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:33.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.828 --rc genhtml_branch_coverage=1 00:28:33.828 --rc genhtml_function_coverage=1 00:28:33.828 --rc genhtml_legend=1 00:28:33.828 --rc geninfo_all_blocks=1 00:28:33.828 --rc geninfo_unexecuted_blocks=1 00:28:33.828 00:28:33.828 ' 00:28:33.828 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:33.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.828 --rc genhtml_branch_coverage=1 00:28:33.828 --rc genhtml_function_coverage=1 00:28:33.828 --rc genhtml_legend=1 00:28:33.828 --rc geninfo_all_blocks=1 00:28:33.829 --rc geninfo_unexecuted_blocks=1 00:28:33.829 00:28:33.829 ' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:33.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.829 --rc genhtml_branch_coverage=1 00:28:33.829 --rc genhtml_function_coverage=1 00:28:33.829 --rc genhtml_legend=1 00:28:33.829 --rc geninfo_all_blocks=1 00:28:33.829 --rc geninfo_unexecuted_blocks=1 00:28:33.829 00:28:33.829 ' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:33.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.829 --rc genhtml_branch_coverage=1 00:28:33.829 --rc genhtml_function_coverage=1 00:28:33.829 --rc genhtml_legend=1 00:28:33.829 --rc geninfo_all_blocks=1 00:28:33.829 --rc geninfo_unexecuted_blocks=1 00:28:33.829 00:28:33.829 ' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.829 
11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.829 11:43:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.829 11:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.970 
11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.970 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:41.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:41.971 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:41.971 Found net devices under 0000:31:00.0: cvl_0_0 
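The two E810 ports found here (cvl_0_0 and cvl_0_1) become the target and initiator endpoints of the test bed; the nvmf_tcp_init trace that follows boils down to the sequence below, condensed directly from the logged commands:

  ip netns add cvl_0_0_ns_spdk                        # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # verify initiator-to-target reachability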
00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:41.971 Found net devices under 0000:31:00.1: cvl_0_1 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.971 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:28:41.971 00:28:41.971 --- 10.0.0.2 ping statistics --- 00:28:41.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.971 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:28:41.971 00:28:41.971 --- 10.0.0.1 ping statistics --- 00:28:41.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.971 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.971 ************************************ 00:28:41.971 START TEST nvmf_digest_clean 00:28:41.971 ************************************ 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3699765 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3699765 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3699765 ']' 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.971 11:43:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.971 [2024-12-09 11:43:33.241997] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:28:41.972 [2024-12-09 11:43:33.242061] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.972 [2024-12-09 11:43:33.322088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.972 [2024-12-09 11:43:33.356694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.972 [2024-12-09 11:43:33.356724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.972 [2024-12-09 11:43:33.356732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.972 [2024-12-09 11:43:33.356739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.972 [2024-12-09 11:43:33.356745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
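The test bed is now fully wired: the target-side port cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, both directions have been verified with ping, and nvmf_tgt has been started inside the namespace with --wait-for-rpc so the digest test can configure it over /var/tmp/spdk.sock before it begins serving. A condensed replay of the setup commands recorded above (interface names and addresses are taken from this log; the full Jenkins workspace path to nvmf_tgt is shortened, and the test's ipts wrapper, which only adds an SPDK_NVMF comment tag, is shown as plain iptables):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside namespace
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP on the default port
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc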
00:28:41.972 [2024-12-09 11:43:33.357284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.972 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.972 null0 00:28:41.972 [2024-12-09 11:43:34.126804] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.233 [2024-12-09 11:43:34.151009] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3699857 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3699857 /var/tmp/bperf.sock 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3699857 ']' 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.233 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:42.233 [2024-12-09 11:43:34.206713] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:28:42.233 [2024-12-09 11:43:34.206761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699857 ] 00:28:42.233 [2024-12-09 11:43:34.294683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.233 [2024-12-09 11:43:34.330539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.175 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.175 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:43.175 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:43.175 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:43.175 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:43.175 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.175 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.745 nvme0n1 00:28:43.745 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:43.745 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.745 Running I/O for 2 seconds... 
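The bdevperf summary that follows reports both IOPS and MiB/s; the two columns are tied together by the configured I/O size, 4096 bytes for this run. A quick arithmetic check against the table below (the 19556.13 figure is the final IOPS value from the table; the formula is the standard definition, nothing SPDK-specific):

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 19556.13 * 4096 / 1048576 }'   # prints 76.39 MiB/s, matching the table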
00:28:45.628 19601.00 IOPS, 76.57 MiB/s [2024-12-09T10:43:37.790Z] 19546.00 IOPS, 76.35 MiB/s 00:28:45.628 Latency(us) 00:28:45.628 [2024-12-09T10:43:37.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.628 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:45.628 nvme0n1 : 2.00 19556.13 76.39 0.00 0.00 6537.63 2908.16 21736.11 00:28:45.628 [2024-12-09T10:43:37.790Z] =================================================================================================================== 00:28:45.628 [2024-12-09T10:43:37.790Z] Total : 19556.13 76.39 0.00 0.00 6537.63 2908.16 21736.11 00:28:45.628 { 00:28:45.628 "results": [ 00:28:45.628 { 00:28:45.628 "job": "nvme0n1", 00:28:45.628 "core_mask": "0x2", 00:28:45.628 "workload": "randread", 00:28:45.628 "status": "finished", 00:28:45.628 "queue_depth": 128, 00:28:45.628 "io_size": 4096, 00:28:45.628 "runtime": 2.003617, 00:28:45.628 "iops": 19556.13273395065, 00:28:45.628 "mibps": 76.39114349199473, 00:28:45.628 "io_failed": 0, 00:28:45.628 "io_timeout": 0, 00:28:45.628 "avg_latency_us": 6537.630168185182, 00:28:45.628 "min_latency_us": 2908.16, 00:28:45.628 "max_latency_us": 21736.106666666667 00:28:45.628 } 00:28:45.628 ], 00:28:45.628 "core_count": 1 00:28:45.628 } 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:45.890 | select(.opcode=="crc32c") 00:28:45.890 | "\(.module_name) \(.executed)"' 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3699857 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3699857 ']' 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3699857 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.890 11:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699857 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699857' 00:28:46.150 killing process with pid 3699857 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3699857 00:28:46.150 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.150 00:28:46.150 Latency(us) 00:28:46.150 [2024-12-09T10:43:38.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.150 [2024-12-09T10:43:38.312Z] =================================================================================================================== 00:28:46.150 [2024-12-09T10:43:38.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3699857 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:46.150 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3700675 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3700675 /var/tmp/bperf.sock 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3700675 ']' 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.151 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.151 [2024-12-09 11:43:38.212295] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:28:46.151 [2024-12-09 11:43:38.212352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700675 ] 00:28:46.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:46.151 Zero copy mechanism will not be used. 00:28:46.151 [2024-12-09 11:43:38.295670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.411 [2024-12-09 11:43:38.325263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.983 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.983 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:46.983 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:46.983 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:46.983 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.243 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.243 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.503 nvme0n1 00:28:47.503 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:47.503 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.503 Zero copy mechanism will not be used. 00:28:47.503 Running I/O for 2 seconds... 
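Two details are worth calling out for this 131072-byte run. First, the "zero copy threshold (65536)" notice above simply means these large payloads will not use the zero-copy send path; that is expected for 128 KiB I/O, not a failure. Second, the queue depth can be sanity-checked against the results that follow with Little's law (IOPS and mean latency taken from the table below):

  # outstanding I/Os ~= IOPS * mean latency (seconds)
  awk 'BEGIN { printf "%.1f\n", 3811.96 * 4194.73e-6 }'   # prints 16.0, matching the -q 16 setting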
00:28:49.832 3853.00 IOPS, 481.62 MiB/s [2024-12-09T10:43:41.994Z] 3808.00 IOPS, 476.00 MiB/s 00:28:49.832 Latency(us) 00:28:49.832 [2024-12-09T10:43:41.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.832 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:49.832 nvme0n1 : 2.00 3811.96 476.49 0.00 0.00 4194.73 1146.88 7973.55 00:28:49.832 [2024-12-09T10:43:41.994Z] =================================================================================================================== 00:28:49.832 [2024-12-09T10:43:41.994Z] Total : 3811.96 476.49 0.00 0.00 4194.73 1146.88 7973.55 00:28:49.832 { 00:28:49.832 "results": [ 00:28:49.832 { 00:28:49.832 "job": "nvme0n1", 00:28:49.832 "core_mask": "0x2", 00:28:49.832 "workload": "randread", 00:28:49.832 "status": "finished", 00:28:49.832 "queue_depth": 16, 00:28:49.832 "io_size": 131072, 00:28:49.832 "runtime": 2.002121, 00:28:49.832 "iops": 3811.957419156984, 00:28:49.832 "mibps": 476.494677394623, 00:28:49.832 "io_failed": 0, 00:28:49.832 "io_timeout": 0, 00:28:49.832 "avg_latency_us": 4194.731292802236, 00:28:49.832 "min_latency_us": 1146.88, 00:28:49.832 "max_latency_us": 7973.546666666667 00:28:49.832 } 00:28:49.832 ], 00:28:49.832 "core_count": 1 00:28:49.832 } 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.832 | select(.opcode=="crc32c") 00:28:49.832 | "\(.module_name) \(.executed)"' 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3700675 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3700675 ']' 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3700675 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3700675 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3700675' 00:28:49.832 killing process with pid 3700675 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3700675 00:28:49.832 Received shutdown signal, test time was about 2.000000 seconds 00:28:49.832 00:28:49.832 Latency(us) 00:28:49.832 [2024-12-09T10:43:41.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.832 [2024-12-09T10:43:41.994Z] =================================================================================================================== 00:28:49.832 [2024-12-09T10:43:41.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3700675 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3701462 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3701462 /var/tmp/bperf.sock 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3701462 ']' 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.832 11:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:50.093 [2024-12-09 11:43:42.001947] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:28:50.094 [2024-12-09 11:43:42.002007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701462 ] 00:28:50.094 [2024-12-09 11:43:42.088726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.094 [2024-12-09 11:43:42.118476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.665 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.665 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:50.665 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:50.665 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.665 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:50.926 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.927 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.499 nvme0n1 00:28:51.499 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:51.499 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.499 Running I/O for 2 seconds... 
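As with the two runs above, once the two-second workload below completes, the test reads the accelerator statistics back through the bperf RPC socket and asserts that the crc32c digest work was really executed, and by the expected module: "software" in every one of these runs, since they all pass scan_dsa=false. The verification pipeline, exactly as it appears in this log (with the long Jenkins workspace prefix on rpc.py shortened):

  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the checks require executed > 0 and module_name == software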
00:28:53.384 21570.00 IOPS, 84.26 MiB/s [2024-12-09T10:43:45.546Z] 21726.00 IOPS, 84.87 MiB/s 00:28:53.384 Latency(us) 00:28:53.384 [2024-12-09T10:43:45.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.384 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.384 nvme0n1 : 2.01 21741.68 84.93 0.00 0.00 5878.55 2116.27 15182.51 00:28:53.384 [2024-12-09T10:43:45.546Z] =================================================================================================================== 00:28:53.384 [2024-12-09T10:43:45.546Z] Total : 21741.68 84.93 0.00 0.00 5878.55 2116.27 15182.51 00:28:53.384 { 00:28:53.384 "results": [ 00:28:53.384 { 00:28:53.384 "job": "nvme0n1", 00:28:53.384 "core_mask": "0x2", 00:28:53.384 "workload": "randwrite", 00:28:53.384 "status": "finished", 00:28:53.384 "queue_depth": 128, 00:28:53.384 "io_size": 4096, 00:28:53.384 "runtime": 2.006009, 00:28:53.384 "iops": 21741.677131059732, 00:28:53.384 "mibps": 84.92842629320208, 00:28:53.384 "io_failed": 0, 00:28:53.384 "io_timeout": 0, 00:28:53.384 "avg_latency_us": 5878.545127711285, 00:28:53.384 "min_latency_us": 2116.266666666667, 00:28:53.384 "max_latency_us": 15182.506666666666 00:28:53.384 } 00:28:53.384 ], 00:28:53.384 "core_count": 1 00:28:53.384 } 00:28:53.384 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:53.384 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:53.384 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:53.384 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:53.384 | select(.opcode=="crc32c") 00:28:53.384 | "\(.module_name) \(.executed)"' 00:28:53.384 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3701462 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3701462 ']' 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3701462 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701462 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701462' 00:28:53.644 killing process with pid 3701462 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3701462 00:28:53.644 Received shutdown signal, test time was about 2.000000 seconds 00:28:53.644 00:28:53.644 Latency(us) 00:28:53.644 [2024-12-09T10:43:45.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.644 [2024-12-09T10:43:45.806Z] =================================================================================================================== 00:28:53.644 [2024-12-09T10:43:45.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.644 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3701462 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3702234 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3702234 /var/tmp/bperf.sock 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3702234 ']' 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:53.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.904 11:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.904 [2024-12-09 11:43:45.925793] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:28:53.904 [2024-12-09 11:43:45.925852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702234 ] 00:28:53.904 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:53.904 Zero copy mechanism will not be used. 00:28:53.904 [2024-12-09 11:43:46.008818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.904 [2024-12-09 11:43:46.038394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.847 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.847 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:54.847 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:54.847 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:54.847 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.847 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.847 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.108 nvme0n1 00:28:55.108 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:55.108 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:55.369 Zero copy mechanism will not be used. 00:28:55.369 Running I/O for 2 seconds... 
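This fourth run completes the digest-clean matrix: reads and writes are each exercised at small block size with a deep queue and at large block size with a shallow queue, always with DSA disabled. The four invocations, with arguments exactly as logged from host/digest.sh:

  run_bperf randread  4096   128 false
  run_bperf randread  131072 16  false
  run_bperf randwrite 4096   128 false
  run_bperf randwrite 131072 16  false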
00:28:57.255 4778.00 IOPS, 597.25 MiB/s [2024-12-09T10:43:49.417Z] 4641.00 IOPS, 580.12 MiB/s 00:28:57.255 Latency(us) 00:28:57.255 [2024-12-09T10:43:49.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.255 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:57.255 nvme0n1 : 2.00 4643.67 580.46 0.00 0.00 3441.67 1597.44 8901.97 00:28:57.255 [2024-12-09T10:43:49.417Z] =================================================================================================================== 00:28:57.255 [2024-12-09T10:43:49.417Z] Total : 4643.67 580.46 0.00 0.00 3441.67 1597.44 8901.97 00:28:57.255 { 00:28:57.255 "results": [ 00:28:57.255 { 00:28:57.255 "job": "nvme0n1", 00:28:57.255 "core_mask": "0x2", 00:28:57.255 "workload": "randwrite", 00:28:57.255 "status": "finished", 00:28:57.255 "queue_depth": 16, 00:28:57.255 "io_size": 131072, 00:28:57.255 "runtime": 2.003156, 00:28:57.255 "iops": 4643.672285134058, 00:28:57.255 "mibps": 580.4590356417573, 00:28:57.255 "io_failed": 0, 00:28:57.255 "io_timeout": 0, 00:28:57.255 "avg_latency_us": 3441.667816240235, 00:28:57.255 "min_latency_us": 1597.44, 00:28:57.255 "max_latency_us": 8901.973333333333 00:28:57.255 } 00:28:57.255 ], 00:28:57.255 "core_count": 1 00:28:57.255 } 00:28:57.255 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:57.255 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:57.255 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:57.255 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:57.255 | select(.opcode=="crc32c") 00:28:57.255 | "\(.module_name) \(.executed)"' 00:28:57.255 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3702234 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3702234 ']' 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3702234 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3702234 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3702234' 00:28:57.516 killing process with pid 3702234 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3702234 00:28:57.516 Received shutdown signal, test time was about 2.000000 seconds 00:28:57.516 00:28:57.516 Latency(us) 00:28:57.516 [2024-12-09T10:43:49.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.516 [2024-12-09T10:43:49.678Z] =================================================================================================================== 00:28:57.516 [2024-12-09T10:43:49.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3702234 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3699765 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3699765 ']' 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3699765 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.516 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699765 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699765' 00:28:57.778 killing process with pid 3699765 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3699765 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3699765 00:28:57.778 00:28:57.778 real 0m16.660s 00:28:57.778 user 0m33.073s 00:28:57.778 sys 0m3.499s 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.778 ************************************ 00:28:57.778 END TEST nvmf_digest_clean 00:28:57.778 ************************************ 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.778 ************************************ 00:28:57.778 START TEST nvmf_digest_error 00:28:57.778 ************************************ 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3702945 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3702945 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3702945 ']' 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.778 11:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.039 [2024-12-09 11:43:49.974198] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:28:58.039 [2024-12-09 11:43:49.974246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.039 [2024-12-09 11:43:50.053615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.039 [2024-12-09 11:43:50.088746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.039 [2024-12-09 11:43:50.088780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.039 [2024-12-09 11:43:50.088788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.039 [2024-12-09 11:43:50.088795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.039 [2024-12-09 11:43:50.088801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
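The nvmf_digest_error test starting here has the same shape as the clean test but arms a deliberate failure path: a fresh nvmf_tgt is brought up in the namespace, the crc32c opcode is reassigned to the accel "error" module, and bdevperf again attaches with --ddgst so that corrupted digests are caught on the host side. The two target-side RPCs that arm the fault, as they appear below (shown as plain rpc.py calls; the log's rpc_cmd wrapper targets the default /var/tmp/spdk.sock, and the -t/-i values are reproduced verbatim from this run without further interpretation):

  rpc.py accel_assign_opc -o crc32c -m error                    # route crc32c operations to the error module
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # inject digest corruption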
00:28:58.039 [2024-12-09 11:43:50.089360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.612 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.612 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:58.612 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.612 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.612 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.873 [2024-12-09 11:43:50.799397] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.873 null0 00:28:58.873 [2024-12-09 11:43:50.883231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.873 [2024-12-09 11:43:50.907440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3703292 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3703292 /var/tmp/bperf.sock 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3703292 ']' 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.873 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.873 [2024-12-09 11:43:50.965261] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:28:58.873 [2024-12-09 11:43:50.965307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703292 ] 00:28:59.134 [2024-12-09 11:43:51.048684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.134 [2024-12-09 11:43:51.078830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.705 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.705 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:59.705 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.705 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.966 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:59.966 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.966 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.966 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.966 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.966 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.227 nvme0n1 00:29:00.227 11:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:00.227 11:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.227 11:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
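With bdevperf listening on /var/tmp/bperf.sock, the host side is armed: bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 makes every failed I/O retry indefinitely rather than surface an error, and accel_error_inject_error -o crc32c -t disable clears any stale injection before the controller attaches. The records that follow attach the controller with data digest enabled (--ddgst) and then corrupt the next 256 crc32c operations; the order, restated from this run's own commands:

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable    # clean slate first
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256    # then corrupt 256 digest ops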
00:29:00.227 11:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.227 11:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:00.227 11:43:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:00.227 Running I/O for 2 seconds... 00:29:00.227 [2024-12-09 11:43:52.295495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.295529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.295538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.227 [2024-12-09 11:43:52.308456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.308476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.308484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.227 [2024-12-09 11:43:52.319493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.319512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.319519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.227 [2024-12-09 11:43:52.332329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.332348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.332355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.227 [2024-12-09 11:43:52.343022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.343040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.343046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.227 [2024-12-09 11:43:52.356115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.356132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.356139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.227 [2024-12-09 11:43:52.370492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.370508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.370515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.227 [2024-12-09 11:43:52.382298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.227 [2024-12-09 11:43:52.382316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.227 [2024-12-09 11:43:52.382322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.488 [2024-12-09 11:43:52.395762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.488 [2024-12-09 11:43:52.395780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.488 [2024-12-09 11:43:52.395787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.408153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.408169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.408180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.422042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.422059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.422065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.437496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.437513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.437520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.450725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.450742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.450749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.463096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.463113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.463120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.474155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.474172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.474178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.487222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.487239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.487246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.500303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.500320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.500326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.512670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.512688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.512694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.525429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.525448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.525455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.536265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.536282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.536289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.547700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.547717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.547724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.561270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.561287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.561294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.573891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.573908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.573914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.585985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.586002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.586009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.599505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.599522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.599529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.611083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.611100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.611107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.624876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.624893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 
[2024-12-09 11:43:52.624899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.489 [2024-12-09 11:43:52.638035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.489 [2024-12-09 11:43:52.638053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.489 [2024-12-09 11:43:52.638059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.652752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.652770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.652776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.664768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.664785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.664791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.674758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.674775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.674782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.689408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.689425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.689431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.702297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.702313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.702320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.714889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.714905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12072 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.714912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.728399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.728416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.728423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.741577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.741594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.741603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.752474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.752491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.752497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.764938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.764955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.764961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.779006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.779028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.779034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.790581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.790598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.750 [2024-12-09 11:43:52.790604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.750 [2024-12-09 11:43:52.802049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.750 [2024-12-09 11:43:52.802066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:25121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.802073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.814791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.814808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.814814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.828105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.828121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.828127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.839828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.839845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.839851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.853391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.853410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.853417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.866207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.866223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.866230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.878120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.878136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.878143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.890746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.890763] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.890769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.751 [2024-12-09 11:43:52.902096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:00.751 [2024-12-09 11:43:52.902112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.751 [2024-12-09 11:43:52.902119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.011 [2024-12-09 11:43:52.915545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.011 [2024-12-09 11:43:52.915563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:52.915569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:52.928056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:52.928073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:52.928079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:52.941264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:52.941281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:52.941287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:52.952290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:52.952307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:52.952313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:52.965881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:52.965897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:52.965904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:52.978505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:52.978521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:52.978527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:52.992013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:52.992031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:52.992037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.003930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.003946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.003953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.016765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.016782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.016788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.028799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.028816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.028822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.042282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.042298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.042304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.053088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.053104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.053111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.065554] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.065570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.065580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.077508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.077524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.077531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.090632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.090649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.090655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.103625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.103642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.103648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.116265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.116281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.116288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.126651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.126667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.126674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.140300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.140317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.140323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
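Each failure above is a two-record pair: nvme_tcp.c:1365 flags the receive-side data digest mismatch (expected, since the corrupted crc32c module computed it), then nvme_qpair.c prints the READ completing with status 00/22, COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 (Do Not Retry clear), so the --bdev-retry-count -1 set earlier keeps the workload running instead of failing bdevperf. One way to watch the churn from outside while the 2-second run is in flight — bdev_get_iostat is a standard SPDK RPC, and the socket path matches this run:

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1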
00:29:01.012 [2024-12-09 11:43:53.154971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.154987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.154993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.012 [2024-12-09 11:43:53.168075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.012 [2024-12-09 11:43:53.168091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.012 [2024-12-09 11:43:53.168097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.177668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.177685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.177691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.190790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.190806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.190813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.204289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.204305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.204312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.217360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.217376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.217383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.232058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.232075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.232081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.243776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.243792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.243798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.256881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.256897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.256903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.269284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.269300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.269306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 20113.00 IOPS, 78.57 MiB/s [2024-12-09T10:43:53.435Z] [2024-12-09 11:43:53.282123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.282138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.282148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.296530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.296547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.296554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.307048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.307064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.307071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.319035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.319051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.319057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.332095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.332111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.332117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.344622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.344639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.344645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.356341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.356357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.356364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.369556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.369573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.369579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.382474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.382491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.273 [2024-12-09 11:43:53.382498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.273 [2024-12-09 11:43:53.395188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.273 [2024-12-09 11:43:53.395208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.274 [2024-12-09 11:43:53.395215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.274 [2024-12-09 11:43:53.409165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.274 [2024-12-09 11:43:53.409182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:01.274 [2024-12-09 11:43:53.409188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.274 [2024-12-09 11:43:53.420239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.274 [2024-12-09 11:43:53.420256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.274 [2024-12-09 11:43:53.420262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.274 [2024-12-09 11:43:53.431509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.274 [2024-12-09 11:43:53.431525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.274 [2024-12-09 11:43:53.431531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.535 [2024-12-09 11:43:53.444177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.444194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.444200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.456875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.456891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.456897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.471024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.471041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.471048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.484331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.484348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.484354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.497680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.497697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6470 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.497704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.509280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.509296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.509302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.523467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.523483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.523490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.533118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.533134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.533141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.546364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.546380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.546386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.560773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.560790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.560796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.570737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.570753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.570760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.584296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.584313] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.584320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.596905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.596921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.596927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.610727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.610744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.610753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.623026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.623043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.623049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.635922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.635939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.635945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.649926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.649942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.649949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.661794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.661811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.661817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.672239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.672255] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.672262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.536 [2024-12-09 11:43:53.684618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.536 [2024-12-09 11:43:53.684636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.536 [2024-12-09 11:43:53.684642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.797 [2024-12-09 11:43:53.698755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.797 [2024-12-09 11:43:53.698772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.797 [2024-12-09 11:43:53.698779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.797 [2024-12-09 11:43:53.711163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.797 [2024-12-09 11:43:53.711181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.797 [2024-12-09 11:43:53.711187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.797 [2024-12-09 11:43:53.723972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.797 [2024-12-09 11:43:53.723992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.797 [2024-12-09 11:43:53.723999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.797 [2024-12-09 11:43:53.736672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.797 [2024-12-09 11:43:53.736689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.797 [2024-12-09 11:43:53.736696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.797 [2024-12-09 11:43:53.747529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.797 [2024-12-09 11:43:53.747546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.797 [2024-12-09 11:43:53.747552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.797 [2024-12-09 11:43:53.761709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 
00:29:01.797 [2024-12-09 11:43:53.761725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.761732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.773538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.773556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.773562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.786212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.786230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.786236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.799096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.799112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.799119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.811937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.811954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.811960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.823751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.823768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.823777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.834796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.834813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.834819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.848593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.848610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.848616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.860934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.860951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.860957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.874655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.874673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.874679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.887158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.887175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.887181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.898032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.898050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.898056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.908895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.908912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.908918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.923549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.923565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.923571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.934220] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.934242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.934248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.798 [2024-12-09 11:43:53.947308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:01.798 [2024-12-09 11:43:53.947324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.798 [2024-12-09 11:43:53.947331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:53.959976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:53.959993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:53.960000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:53.973382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:53.973399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:53.973405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:53.986540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:53.986557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:53.986563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:53.998127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:53.998144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:53.998150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.009142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.009160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.009166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
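
Each entry in this flood, above and below, repeats one three-line pattern with a fresh timestamp, cid and lba: nvme_tcp.c:1365 flags a data digest (CRC32C) mismatch on a received PDU, nvme_qpair.c:243 prints the READ it belonged to, and nvme_qpair.c:474 shows that command completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable rather than fatal. A rough tally of both halves of the pattern can be pulled from a saved copy of this console with standard tools; console.log below is only a stand-in name for wherever the output was captured, not a file this job writes:

$ grep -c 'data digest error' console.log
$ grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log

In this run the two counts move in lockstep, since every detected digest error is surfaced as exactly one transient transport completion.
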
00:29:02.059 [2024-12-09 11:43:54.022059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.022076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.022083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.035518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.035535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.035542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.047802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.047820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.047826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.060469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.060485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.060492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.071687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.071705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.071711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.085713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.085730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.085737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.097395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.097412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.097418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.109061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.109078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.109084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.122814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.122831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.122838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.133656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.133673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.133680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.148170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.148187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.148196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.059 [2024-12-09 11:43:54.161756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.059 [2024-12-09 11:43:54.161773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-12-09 11:43:54.161780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.060 [2024-12-09 11:43:54.172616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.060 [2024-12-09 11:43:54.172633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-12-09 11:43:54.172639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.060 [2024-12-09 11:43:54.185891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.060 [2024-12-09 11:43:54.185908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-12-09 11:43:54.185914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.060 [2024-12-09 11:43:54.198952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.060 [2024-12-09 11:43:54.198969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-12-09 11:43:54.198976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.060 [2024-12-09 11:43:54.208464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.060 [2024-12-09 11:43:54.208481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-12-09 11:43:54.208487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.320 [2024-12-09 11:43:54.223939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.320 [2024-12-09 11:43:54.223956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.320 [2024-12-09 11:43:54.223963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.320 [2024-12-09 11:43:54.235911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.320 [2024-12-09 11:43:54.235927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.320 [2024-12-09 11:43:54.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.320 [2024-12-09 11:43:54.247853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.320 [2024-12-09 11:43:54.247870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.320 [2024-12-09 11:43:54.247876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.320 [2024-12-09 11:43:54.261108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.320 [2024-12-09 11:43:54.261129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.320 [2024-12-09 11:43:54.261136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.320 [2024-12-09 11:43:54.273624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680) 00:29:02.320 [2024-12-09 11:43:54.273641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.320 [2024-12-09 11:43:54.273648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.320 20255.50 IOPS, 79.12 MiB/s [2024-12-09T10:43:54.482Z] [2024-12-09 11:43:54.284952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e0680)
00:29:02.320 [2024-12-09 11:43:54.284968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.320 [2024-12-09 11:43:54.284974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.320
00:29:02.320 Latency(us)
00:29:02.320 [2024-12-09T10:43:54.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:02.320 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:02.320 nvme0n1 : 2.00 20274.32 79.20 0.00 0.00 6307.04 2553.17 17148.59
00:29:02.320 [2024-12-09T10:43:54.482Z] ===================================================================================================================
00:29:02.320 [2024-12-09T10:43:54.482Z] Total : 20274.32 79.20 0.00 0.00 6307.04 2553.17 17148.59
00:29:02.320 {
00:29:02.320 "results": [
00:29:02.320 {
00:29:02.320 "job": "nvme0n1",
00:29:02.320 "core_mask": "0x2",
00:29:02.320 "workload": "randread",
00:29:02.320 "status": "finished",
00:29:02.320 "queue_depth": 128,
00:29:02.320 "io_size": 4096,
00:29:02.320 "runtime": 2.004457,
00:29:02.320 "iops": 20274.318680819793,
00:29:02.320 "mibps": 79.19655734695232,
00:29:02.320 "io_failed": 0,
00:29:02.320 "io_timeout": 0,
00:29:02.320 "avg_latency_us": 6307.036536660186,
00:29:02.320 "min_latency_us": 2553.173333333333,
00:29:02.320 "max_latency_us": 17148.586666666666
00:29:02.320 }
00:29:02.320 ],
00:29:02.320 "core_count": 1
00:29:02.320 }
00:29:02.321 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:02.321 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:02.321 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:02.321 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:02.321 | .driver_specific
00:29:02.321 | .nvme_error
00:29:02.321 | .status_code
00:29:02.321 | .command_transient_transport_error'
00:29:02.581 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:29:02.581 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3703292
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3703292 ']'
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3703292
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3703292
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3703292'
00:29:02.582 killing process with pid 3703292
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3703292
00:29:02.582 Received shutdown signal, test time was about 2.000000 seconds
00:29:02.582
00:29:02.582 Latency(us)
00:29:02.582 [2024-12-09T10:43:54.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:02.582 [2024-12-09T10:43:54.744Z] ===================================================================================================================
00:29:02.582 [2024-12-09T10:43:54.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3703292
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3703970
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3703970 /var/tmp/bperf.sock
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3703970 ']'
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:02.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:02.582 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:02.582 [2024-12-09 11:43:54.706056] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:29:02.582 [2024-12-09 11:43:54.706117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703970 ]
00:29:02.582 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:02.582 Zero copy mechanism will not be used.
00:29:02.842 [2024-12-09 11:43:54.791951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:02.842 [2024-12-09 11:43:54.821429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:03.413 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:03.413 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:03.413 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:03.413 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:03.673 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:03.673 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.673 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.673 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.673 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:03.673 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:03.933 nvme0n1
00:29:03.933 11:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:03.933 11:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.933 11:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:03.933 11:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.933 11:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:03.933 11:43:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:04.194 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:04.195 Zero copy mechanism will not be used.
00:29:04.195 Running I/O for 2 seconds...
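
The xtrace lines above lay out the whole sequence run_bperf_err drives against the fresh bdevperf instance over /var/tmp/bperf.sock. Collapsed into a standalone shell sketch for readability (every RPC name and flag is taken verbatim from the trace; SPDK_ROOT is only shorthand for the long Jenkins checkout path, and the comments are a gloss on the flags, not anything the job prints):

$ RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"
$ $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep per-bdev NVMe error counters; never stop retrying at the bdev layer
$ $RPC accel_error_inject_error -o crc32c -t disable                    # start clean: no crc32c corruption injected yet
$ $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # attach with NVMe/TCP data digest enabled
$ $RPC accel_error_inject_error -o crc32c -t corrupt -i 32              # switch injection to corrupting crc32c results (flags as given in the trace)
$ $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$ $RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The closing jq filter is host/digest.sh@28's pipeline written on one line; as with the 4096-byte run, where (( 159 > 0 )) passed, the test only requires that this transient-error counter comes back greater than zero.
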
00:29:04.195 [2024-12-09 11:43:56.176571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.176605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.176614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.182837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.182859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.182867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.192454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.192474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.192482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.200942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.200961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.200968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.209287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.209304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.209318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.219246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.219264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.219271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.226029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.226047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.226054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.237349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.237367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.237373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.247546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.247565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.247571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.255796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.255814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.255821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.265334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.265352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.265359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.274074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.274093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.274100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.284453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.284472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.284478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.291248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.291270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.291277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.300600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.300618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.300625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.311438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.311457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.311463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.322828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.322846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.322853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.334476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.334494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.334500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.344744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.344763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.344769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.195 [2024-12-09 11:43:56.353433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.195 [2024-12-09 11:43:56.353452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.195 [2024-12-09 11:43:56.353458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.362748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.362767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.362773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.374561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.374579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.374585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.382596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.382614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.382620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.389326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.389344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.389350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.398969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.398988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.398994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.408020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.408038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.408045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.418237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.418255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.418261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.426606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.426624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 
[2024-12-09 11:43:56.426630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.437871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.437889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.437896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.448360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.448378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.448385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.456822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.456841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.456850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.456 [2024-12-09 11:43:56.466083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.456 [2024-12-09 11:43:56.466101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.456 [2024-12-09 11:43:56.466107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.476792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.476811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.476817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.484893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.484911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.484917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.491399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.491417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.491424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.499946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.499965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.499971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.510917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.510935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.510942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.520717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.520736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.520743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.528470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.528489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.528495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.538635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.538657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.538663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.548916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.548934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.548940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.559304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.559323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.559330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.568959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.568977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.568983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.579545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.579564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.579570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.588168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.588186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.588192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.599965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.599983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.599990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.457 [2024-12-09 11:43:56.612395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.457 [2024-12-09 11:43:56.612413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.457 [2024-12-09 11:43:56.612420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.623430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.623449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.623456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.630941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.630959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.630966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.639918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.639937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.639944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.649951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.649969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.649976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.660288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.660307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.660314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.669698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.669716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.669722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.680661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.680680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.680686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.689069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.689087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.689093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.697777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 
[2024-12-09 11:43:56.697794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.697801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.707657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.707675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.707685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.716662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.716681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.716687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.726341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.726360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.726366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.733906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.733924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.733930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.743229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.743248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.743254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.752786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.752804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.752810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.760522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.760541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.760547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.768740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.768758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.768765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.774684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.774702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.774708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.781770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.781791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.781798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.789415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.789433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.789439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.797920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.797939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.718 [2024-12-09 11:43:56.797945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.718 [2024-12-09 11:43:56.807381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.718 [2024-12-09 11:43:56.807400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.807406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.719 [2024-12-09 11:43:56.817495] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.719 [2024-12-09 11:43:56.817514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.817520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.719 [2024-12-09 11:43:56.828114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.719 [2024-12-09 11:43:56.828132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.828139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.719 [2024-12-09 11:43:56.837309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.719 [2024-12-09 11:43:56.837327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.837334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.719 [2024-12-09 11:43:56.846149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.719 [2024-12-09 11:43:56.846168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.846174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.719 [2024-12-09 11:43:56.856278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.719 [2024-12-09 11:43:56.856296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.856303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.719 [2024-12-09 11:43:56.864240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.719 [2024-12-09 11:43:56.864258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.864265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.719 [2024-12-09 11:43:56.874939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.719 [2024-12-09 11:43:56.874957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.719 [2024-12-09 11:43:56.874963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:04.980 [2024-12-09 11:43:56.884738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.884756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.884762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.895059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.895077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.895083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.904779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.904798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.904804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.916905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.916924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.916930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.928985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.929004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.929014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.941545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.941563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.941570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.949846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.949864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.949874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.960252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.960271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.960277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.968909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.968927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.968933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.976820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.976838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.976844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.986942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.980 [2024-12-09 11:43:56.986960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.980 [2024-12-09 11:43:56.986966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.980 [2024-12-09 11:43:56.999364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:56.999382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:56.999389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.012242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.012261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.012267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.024619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.024638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.024645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.037191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.037210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.037216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.050053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.050072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.050079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.062711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.062730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.062736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.075528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.075547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.075553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.087565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.087583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.087589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.100790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.100809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.100815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.111580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.111598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.111604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.124309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.124327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.124333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:04.981 [2024-12-09 11:43:57.135461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:04.981 [2024-12-09 11:43:57.135479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.981 [2024-12-09 11:43:57.135485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.242 [2024-12-09 11:43:57.147077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.242 [2024-12-09 11:43:57.147095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.242 [2024-12-09 11:43:57.147105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.242 [2024-12-09 11:43:57.158447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.242 [2024-12-09 11:43:57.158465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.242 [2024-12-09 11:43:57.158472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.242 3121.00 IOPS, 390.12 MiB/s [2024-12-09T10:43:57.405Z] [2024-12-09 11:43:57.169112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.169130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.169136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.176702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.176720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.176726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.185850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.185868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.185875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.195426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.195444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.195450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.205871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.205889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.205896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.215840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.215858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.215865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.226253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.226271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.226278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.237435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.237456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.237462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.248088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.248107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.248114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.260092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.260110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.260116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.269747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.269765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.269772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.278635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.278654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.278662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.290330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.290348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.290355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.299838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.299856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.299862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.308483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.308501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.308507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.316428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.316446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.316452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.327280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.327298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.327305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.337375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.337393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.337399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.348533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.348551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.348557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.358831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.358850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.358856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.368539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.368557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.368563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.379580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.379598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.379604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.390177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 [2024-12-09 11:43:57.390195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.390201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.243 [2024-12-09 11:43:57.400781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.243 
[2024-12-09 11:43:57.400800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.243 [2024-12-09 11:43:57.400806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.412578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.412596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.412606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.422412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.422430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.422437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.433383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.433401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.433407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.442541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.442559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.442566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.453068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.453087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.453093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.462387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.462406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.462412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.472933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.472951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.472958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.483652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.483670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.483676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.492616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.492633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.492640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.503706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.503731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.503737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.513686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.513704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.513710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.520691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.520709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.520715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.532267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.532284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.532290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.542228] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.542246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.542252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.552998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.553025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.553035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.505 [2024-12-09 11:43:57.561310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.505 [2024-12-09 11:43:57.561328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.505 [2024-12-09 11:43:57.561334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.571570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.571587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.571593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.581705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.581723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.581729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.592917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.592935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.592941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.605710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.605727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.605733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
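Every failure in this stretch is the same event repeated per I/O: the receive path recomputes the NVMe/TCP data digest (DDGST), a CRC-32C over the data PDU payload, finds a mismatch (the nvme_tcp_accel_seq_recv_compute_crc32_done callback logs the digest error), and the affected READ is completed with the generic status TRANSIENT TRANSPORT ERROR, which SPDK prints as (00/22), i.e. status code type 0x0 / status code 0x22. Below is a minimal sketch of both checks in illustrative Python rather than SPDK's C (SPDK offloads the CRC through its accel framework, and the helper names ddgst_matches and decode_cqe_status are hypothetical, not SPDK APIs):

# Illustrative sketch only, not SPDK source.
def crc32c(data: bytes) -> int:
    """Bitwise reflected CRC-32C (Castagnoli), polynomial 0x1EDC6F41."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def ddgst_matches(payload: bytes, ddgst: int) -> bool:
    # Receiver-side check: recompute the digest over the received payload
    # and compare it with the DDGST field that trailed the data PDU.
    # A mismatch is what each *ERROR* line in this log reports.
    return crc32c(payload) == ddgst

def decode_cqe_status(status16: int) -> tuple[int, int]:
    # 16-bit completion status word (phase tag in bit 0): status code in
    # bits 8:1, status code type in bits 11:9. SPDK prints the pair as
    # "(sct/sc)" -- (00/22) above is SCT 0x0, SC 0x22.
    return (status16 >> 9) & 0x7, (status16 >> 1) & 0xFF

assert crc32c(b"123456789") == 0xE3069283  # published CRC-32C check value

Apart from the digest check itself, the entries differ only in per-command bookkeeping (lba, cid, and the sqhd value cycling 0x0002/0x0022/0x0042/0x0062); the (00/22) status is identical throughout the stream.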
00:29:05.506 [2024-12-09 11:43:57.613192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.613209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.613215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.622237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.622254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.622261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.631833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.631850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.631857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.642088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.642106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.642112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.652095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.652112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.652118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.506 [2024-12-09 11:43:57.659418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.506 [2024-12-09 11:43:57.659436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.506 [2024-12-09 11:43:57.659442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.669553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.669574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.669580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.680437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.680455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.680461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.690387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.690404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.690411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.700664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.700681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.700687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.708689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.708706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.708712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.719811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.719828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.719834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.728768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.728785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.728791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.739190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.739207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.739213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.749776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.749793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.749800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.759243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.759261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.759268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.767924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.767941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.767947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.779897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.779914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.779920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.792210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.792227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.792233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.803367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.803384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.803391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.812829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.812846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.812852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.824083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.824101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.824107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.835027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.835044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.835050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.844745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.844762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.844772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.850957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.850974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.850981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.860713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.860731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.860738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.868704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.868722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.868729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.877216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.877235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 
[2024-12-09 11:43:57.877241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.887215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.887233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.887239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.898261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.898280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.898286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.906427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.906445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.906451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.768 [2024-12-09 11:43:57.918816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:05.768 [2024-12-09 11:43:57.918835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.768 [2024-12-09 11:43:57.918841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.030 [2024-12-09 11:43:57.930801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.030 [2024-12-09 11:43:57.930823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.030 [2024-12-09 11:43:57.930829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.030 [2024-12-09 11:43:57.942462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.030 [2024-12-09 11:43:57.942481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.030 [2024-12-09 11:43:57.942487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.030 [2024-12-09 11:43:57.953433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.030 [2024-12-09 11:43:57.953451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.030 [2024-12-09 11:43:57.953457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.030 [2024-12-09 11:43:57.963761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.030 [2024-12-09 11:43:57.963779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.030 [2024-12-09 11:43:57.963785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.030 [2024-12-09 11:43:57.971724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:57.971742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:57.971748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:57.983666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:57.983684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:57.983690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:57.995123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:57.995141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:57.995147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.006576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.006594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.006601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.018096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.018113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.018119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.028341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.028358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.028365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.041252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.041270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.041276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.053743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.053761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.053768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.065500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.065519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.065525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.076571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.076590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.076596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.088424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.088443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.088449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.100254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.100273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.100279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.112140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.112158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.112164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.121925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.121943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.121952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.126446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.126464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.126470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.131653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.131671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.131677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.140102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.140120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.140126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.150472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.150490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.150496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.031 [2024-12-09 11:43:58.160902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a61b0) 00:29:06.031 [2024-12-09 11:43:58.160921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.031 [2024-12-09 11:43:58.160927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.031 3098.00 IOPS, 387.25 MiB/s [2024-12-09T10:43:58.193Z] [2024-12-09 11:43:58.172036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x13a61b0)
00:29:06.031 [2024-12-09 11:43:58.172055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:06.031 [2024-12-09 11:43:58.172061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:06.031
00:29:06.031 Latency(us)
00:29:06.031 [2024-12-09T10:43:58.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:06.031 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:06.031 nvme0n1 : 2.01 3097.64 387.20 0.00 0.00 5161.33 621.23 19223.89
00:29:06.031 [2024-12-09T10:43:58.193Z] ===================================================================================================================
00:29:06.031 [2024-12-09T10:43:58.193Z] Total : 3097.64 387.20 0.00 0.00 5161.33 621.23 19223.89
00:29:06.031 {
00:29:06.031   "results": [
00:29:06.031     {
00:29:06.031       "job": "nvme0n1",
00:29:06.031       "core_mask": "0x2",
00:29:06.031       "workload": "randread",
00:29:06.031       "status": "finished",
00:29:06.031       "queue_depth": 16,
00:29:06.031       "io_size": 131072,
00:29:06.031       "runtime": 2.005398,
00:29:06.031       "iops": 3097.6394710675886,
00:29:06.031       "mibps": 387.2049338834486,
00:29:06.031       "io_failed": 0,
00:29:06.031       "io_timeout": 0,
00:29:06.031       "avg_latency_us": 5161.3317192530585,
00:29:06.031       "min_latency_us": 621.2266666666667,
00:29:06.031       "max_latency_us": 19223.893333333333
00:29:06.031     }
00:29:06.031   ],
00:29:06.031   "core_count": 1
00:29:06.031 }
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:06.292 | .driver_specific
00:29:06.292 | .nvme_error
00:29:06.292 | .status_code
00:29:06.292 | .command_transient_transport_error'
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3703970
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3703970 ']'
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3703970
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3703970
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:06.292 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3703970'
killing process with pid 3703970
11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3703970
Received shutdown signal, test time was about 2.000000 seconds
00:29:06.293
00:29:06.293 Latency(us)
00:29:06.293 [2024-12-09T10:43:58.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:06.293 [2024-12-09T10:43:58.455Z] ===================================================================================================================
00:29:06.293 [2024-12-09T10:43:58.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:06.293 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3703970
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3704661
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3704661 /var/tmp/bperf.sock
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3704661 ']'
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:06.554 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:06.554 [2024-12-09 11:43:58.590336] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
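The trace above is the pass/fail core of the randread leg: get_transient_errcount pulls bdevperf's iostat JSON over the bperf RPC socket, extracts the per-bdev count of COMMAND TRANSIENT TRANSPORT ERROR completions, and the test asserts it is non-zero ((( 201 > 0 )) here, meaning 201 injected digest errors were detected). A minimal stand-alone sketch of the same query, assuming jq is installed and a bdevperf instance from this run is still listening on /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Count completions that failed as "command transient transport error" on a bdev.
    # Mirrors get_transient_errcount from host/digest.sh; paths are this run's workspace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "detected $errcount transient transport errors"

The counter is only present because bdev_nvme_set_options was invoked with --nvme-error-stat when the bperf process was configured.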
00:29:06.554 [2024-12-09 11:43:58.590390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704661 ]
00:29:06.554 [2024-12-09 11:43:58.676684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:06.554 [2024-12-09 11:43:58.704065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.496 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.756 nvme0n1
00:29:07.756 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:07.756 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.756 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.756 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.756 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:07.756 11:43:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:08.017 Running I/O for 2 seconds...
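The setup traced above is what makes the randwrite leg an error-path test rather than a plain benchmark: NVMe error statistics are enabled and bdev retries disabled, the controller is attached with the TCP data digest (--ddgst) turned on, and the accel layer's crc32c operation is switched from disable to corrupt so that received data digests stop matching. A condensed replay of that RPC sequence, assuming a bdevperf instance listening on /var/tmp/bperf.sock and the target from this run at 10.0.0.2:4420 (all arguments copied from the trace):

    #!/usr/bin/env bash
    # Replay of the digest-error setup RPCs traced above (host/digest.sh steps @61-@69).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    sock=/var/tmp/bperf.sock

    # Keep per-status-code NVMe error counters and fail I/O instead of retrying.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with error injection off, then attach the controller with data digest on.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results (-o crc32c -t corrupt -i 256, exactly as traced), so the
    # recomputed data digest mismatches and completions surface as transport errors.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256
    # Drive I/O through the corrupted digest path.
    "$bperf" -s "$sock" perform_tests

Every "Data digest error" record that follows is tcp.c reporting exactly this induced mismatch on the new qpair.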
00:29:08.017 [2024-12-09 11:43:59.949763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee88f8 00:29:08.017 [2024-12-09 11:43:59.951515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:43:59.951543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:43:59.959357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eec840 00:29:08.017 [2024-12-09 11:43:59.960424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:43:59.960447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:43:59.971878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee12d8 00:29:08.017 [2024-12-09 11:43:59.972941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:43:59.972957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:43:59.985408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee12d8 00:29:08.017 [2024-12-09 11:43:59.987125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:43:59.987141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:43:59.995021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef6cc8 00:29:08.017 [2024-12-09 11:43:59.996019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:43:59.996036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:44:00.009690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef6cc8 00:29:08.017 [2024-12-09 11:44:00.011291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:44:00.011308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:44:00.019289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee1b48 00:29:08.017 [2024-12-09 11:44:00.020206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:44:00.020222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:44:00.031065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee23b8 00:29:08.017 [2024-12-09 11:44:00.032022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:44:00.032038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:44:00.043691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee5658 00:29:08.017 [2024-12-09 11:44:00.044699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:44:00.044715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:44:00.055558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee5658 00:29:08.017 [2024-12-09 11:44:00.056578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:44:00.056593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:44:00.067425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee5658 00:29:08.017 [2024-12-09 11:44:00.068454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.017 [2024-12-09 11:44:00.068470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:08.017 [2024-12-09 11:44:00.080794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee5658 00:29:08.018 [2024-12-09 11:44:00.082463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.082478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.091184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee2c28 00:29:08.018 [2024-12-09 11:44:00.092223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.092240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.103106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee3d08 00:29:08.018 [2024-12-09 11:44:00.104118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.104134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.116468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee4de8 00:29:08.018 [2024-12-09 11:44:00.118114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.118130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.127190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efc560 00:29:08.018 [2024-12-09 11:44:00.128313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.128329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.139191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.018 [2024-12-09 11:44:00.140354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.140370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.151042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.018 [2024-12-09 11:44:00.152177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.152193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.162878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.018 [2024-12-09 11:44:00.164034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.164050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.018 [2024-12-09 11:44:00.174703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.018 [2024-12-09 11:44:00.175866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.018 [2024-12-09 11:44:00.175882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.186540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.279 [2024-12-09 11:44:00.187701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.187717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.198413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.279 [2024-12-09 11:44:00.199566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.199582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.210243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.279 [2024-12-09 11:44:00.211383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.211398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.222070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.279 [2024-12-09 11:44:00.223233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.223248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.235387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1430 00:29:08.279 [2024-12-09 11:44:00.237176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.237192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.245672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efa3a0 00:29:08.279 [2024-12-09 11:44:00.246836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.246852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.257475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efa3a0 00:29:08.279 [2024-12-09 11:44:00.258621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.258637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.269224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef0bc0 00:29:08.279 [2024-12-09 11:44:00.270361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.279 [2024-12-09 11:44:00.270380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:08.279 [2024-12-09 11:44:00.280489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efdeb0 00:29:08.280 [2024-12-09 11:44:00.281565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.281581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.293049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efdeb0 00:29:08.280 [2024-12-09 11:44:00.294171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.294187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.306358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efdeb0 00:29:08.280 [2024-12-09 11:44:00.308125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.308140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.316694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eebb98 00:29:08.280 [2024-12-09 11:44:00.317824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.317840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.328583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeaab8 00:29:08.280 [2024-12-09 11:44:00.329714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.329729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.340467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef4b08 00:29:08.280 [2024-12-09 11:44:00.341594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.341610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.352343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef3a28 00:29:08.280 [2024-12-09 11:44:00.353438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 
11:44:00.353454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.364368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeff18 00:29:08.280 [2024-12-09 11:44:00.365765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.365781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.376992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeee38 00:29:08.280 [2024-12-09 11:44:00.378409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.378425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.388798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.280 [2024-12-09 11:44:00.390267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.390283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.400710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.280 [2024-12-09 11:44:00.402093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.402109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.412537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.280 [2024-12-09 11:44:00.413943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.413959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.424343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.280 [2024-12-09 11:44:00.425737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.280 [2024-12-09 11:44:00.425752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.280 [2024-12-09 11:44:00.436157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.280 [2024-12-09 11:44:00.437569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:08.280 [2024-12-09 11:44:00.437584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.447992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.449410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.449426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.459812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.461222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.461238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.471643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.473049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.473065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.483454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.484871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.484887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.495262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.496665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.496680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.507056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.508464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.508479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.518863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.520278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22946 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.520293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.530683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.532076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.532092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.542531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.543940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.543956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.554327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.555730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.555747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.542 [2024-12-09 11:44:00.566124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede8a8 00:29:08.542 [2024-12-09 11:44:00.567534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.542 [2024-12-09 11:44:00.567550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.577860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeee38 00:29:08.543 [2024-12-09 11:44:00.579229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.579248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.589627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.591016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.591031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.601424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.602804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:11221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.602820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.613275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.614660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.614676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.625081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.626466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.626482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.636871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.638253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.638269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.648685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.650068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.650083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.660511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.661897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.661912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.672332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.673711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.673727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.684155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.685545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:87 nsid:1 lba:226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.685561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.543 [2024-12-09 11:44:00.695976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:08.543 [2024-12-09 11:44:00.697365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.543 [2024-12-09 11:44:00.697381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.707754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee01f8 00:29:08.804 [2024-12-09 11:44:00.709132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.709148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.721218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee01f8 00:29:08.804 [2024-12-09 11:44:00.723252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.723268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.731585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee5ec8 00:29:08.804 [2024-12-09 11:44:00.732962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.732978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.743360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.744702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.744718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.755203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.756556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.756571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.767032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.768379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.768394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.778891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.780273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.780289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.790745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.792090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.792106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.802575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.803929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.803945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.814431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.815787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.804 [2024-12-09 11:44:00.815803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.804 [2024-12-09 11:44:00.826257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.804 [2024-12-09 11:44:00.827619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.827635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.838079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.805 [2024-12-09 11:44:00.839434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.839449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.849900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.805 [2024-12-09 
11:44:00.851230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.851246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.861720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf988 00:29:08.805 [2024-12-09 11:44:00.863037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.863054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.873500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf118 00:29:08.805 [2024-12-09 11:44:00.874849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.874865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.885323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf118 00:29:08.805 [2024-12-09 11:44:00.886670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.886689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.897145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf118 00:29:08.805 [2024-12-09 11:44:00.898488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.898504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.908935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef0788 00:29:08.805 [2024-12-09 11:44:00.910253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.910269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.920776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef0788 00:29:08.805 [2024-12-09 11:44:00.922081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.922096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.932561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efeb58 
00:29:08.805 [2024-12-09 11:44:00.933887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.933903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:08.805 21453.00 IOPS, 83.80 MiB/s [2024-12-09T10:44:00.967Z] [2024-12-09 11:44:00.944396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efeb58 00:29:08.805 [2024-12-09 11:44:00.945735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.945751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:08.805 [2024-12-09 11:44:00.956201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efeb58 00:29:08.805 [2024-12-09 11:44:00.957526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.805 [2024-12-09 11:44:00.957543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:00.968051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efeb58 00:29:09.066 [2024-12-09 11:44:00.969376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:00.969392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:00.979870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efeb58 00:29:09.066 [2024-12-09 11:44:00.981188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:00.981204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:00.991815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efeb58 00:29:09.066 [2024-12-09 11:44:00.993143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:00.993159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.003602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef2d80 00:29:09.066 [2024-12-09 11:44:01.004920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.004937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.015445] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef2d80 00:29:09.066 [2024-12-09 11:44:01.016767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.016782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.027284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef2d80 00:29:09.066 [2024-12-09 11:44:01.028601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.028617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.039131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef2d80 00:29:09.066 [2024-12-09 11:44:01.040445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.040462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.050960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef2d80 00:29:09.066 [2024-12-09 11:44:01.052259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.052275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.064264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef2d80 00:29:09.066 [2024-12-09 11:44:01.066225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.066241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.074946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eec408 00:29:09.066 [2024-12-09 11:44:01.076415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.076431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.086915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeaab8 00:29:09.066 [2024-12-09 11:44:01.088390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.088406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 
11:44:01.098757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeaab8 00:29:09.066 [2024-12-09 11:44:01.100206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.100222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.112096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeaab8 00:29:09.066 [2024-12-09 11:44:01.114199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.114214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.121670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef8e88 00:29:09.066 [2024-12-09 11:44:01.123123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.066 [2024-12-09 11:44:01.123139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:09.066 [2024-12-09 11:44:01.134281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eec840 00:29:09.067 [2024-12-09 11:44:01.135735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.135751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:09.067 [2024-12-09 11:44:01.146120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eec840 00:29:09.067 [2024-12-09 11:44:01.147565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.147581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:09.067 [2024-12-09 11:44:01.157955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eec840 00:29:09.067 [2024-12-09 11:44:01.159419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.159435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:09.067 [2024-12-09 11:44:01.169794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eec840 00:29:09.067 [2024-12-09 11:44:01.171225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.171242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:29:09.067 [2024-12-09 11:44:01.181645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eea680 00:29:09.067 [2024-12-09 11:44:01.183089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.183106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:09.067 [2024-12-09 11:44:01.195015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee9168 00:29:09.067 [2024-12-09 11:44:01.197111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.197129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:09.067 [2024-12-09 11:44:01.205700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf550 00:29:09.067 [2024-12-09 11:44:01.207291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.207307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:09.067 [2024-12-09 11:44:01.215425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef9f68 00:29:09.067 [2024-12-09 11:44:01.216345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.067 [2024-12-09 11:44:01.216361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.227249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eee190 00:29:09.328 [2024-12-09 11:44:01.228202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.228218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.239119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eee190 00:29:09.328 [2024-12-09 11:44:01.240038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.240054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.250946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eee190 00:29:09.328 [2024-12-09 11:44:01.251897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.251913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 
sqhd:003f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.264268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eee190 00:29:09.328 [2024-12-09 11:44:01.265867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.265883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.273824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee73e0 00:29:09.328 [2024-12-09 11:44:01.274947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.274963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.288710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efdeb0 00:29:09.328 [2024-12-09 11:44:01.290474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.290490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.299405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede038 00:29:09.328 [2024-12-09 11:44:01.300679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.300695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.312937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee8088 00:29:09.328 [2024-12-09 11:44:01.314871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.328 [2024-12-09 11:44:01.314887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:09.328 [2024-12-09 11:44:01.323629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeaef0 00:29:09.328 [2024-12-09 11:44:01.325070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.325085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.335595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6b70 00:29:09.329 [2024-12-09 11:44:01.337030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.337046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.347430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6b70 00:29:09.329 [2024-12-09 11:44:01.348869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.348884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.359252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6b70 00:29:09.329 [2024-12-09 11:44:01.360689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.360705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.371030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1868 00:29:09.329 [2024-12-09 11:44:01.372420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.372436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.382880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1868 00:29:09.329 [2024-12-09 11:44:01.384308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.384325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.394710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1868 00:29:09.329 [2024-12-09 11:44:01.396137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.396153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.406545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef1868 00:29:09.329 [2024-12-09 11:44:01.407976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.407992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.417600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eed920 00:29:09.329 [2024-12-09 11:44:01.419013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.419028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.428297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edfdc0 00:29:09.329 [2024-12-09 11:44:01.429177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.429193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.439735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eee190 00:29:09.329 [2024-12-09 11:44:01.440651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.440666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.452732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6fa8 00:29:09.329 [2024-12-09 11:44:01.453827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.453843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.464565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efb480 00:29:09.329 [2024-12-09 11:44:01.465650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.465665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.477875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efb480 00:29:09.329 [2024-12-09 11:44:01.479614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.479630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.329 [2024-12-09 11:44:01.487457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016edf550 00:29:09.329 [2024-12-09 11:44:01.488526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.329 [2024-12-09 11:44:01.488541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:09.590 [2024-12-09 11:44:01.500116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efc998 00:29:09.590 [2024-12-09 11:44:01.501198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.590 [2024-12-09 11:44:01.501217] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.590 [2024-12-09 11:44:01.511953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efe2e8 00:29:09.590 [2024-12-09 11:44:01.513024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.590 [2024-12-09 11:44:01.513039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.590 [2024-12-09 11:44:01.523809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef8618 00:29:09.590 [2024-12-09 11:44:01.524875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.590 [2024-12-09 11:44:01.524890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.590 [2024-12-09 11:44:01.537189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eecc78 00:29:09.590 [2024-12-09 11:44:01.538923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.590 [2024-12-09 11:44:01.538939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.590 [2024-12-09 11:44:01.547559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee6300 00:29:09.590 [2024-12-09 11:44:01.548662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.590 [2024-12-09 11:44:01.548678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:09.590 [2024-12-09 11:44:01.558635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee23b8 00:29:09.590 [2024-12-09 11:44:01.559699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.590 [2024-12-09 11:44:01.559714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:09.590 [2024-12-09 11:44:01.571303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee3498 00:29:09.590 [2024-12-09 11:44:01.572368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.572384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.583150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee4578 00:29:09.591 [2024-12-09 11:44:01.584183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.584199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.594142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee4de8 00:29:09.591 [2024-12-09 11:44:01.595167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.595182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.606711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ee4de8 00:29:09.591 [2024-12-09 11:44:01.607778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.607793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.618554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efc560 00:29:09.591 [2024-12-09 11:44:01.619631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.619647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.630432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef2510 00:29:09.591 [2024-12-09 11:44:01.631499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.631515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.642250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eea680 00:29:09.591 [2024-12-09 11:44:01.643308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.643324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.654081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eea680 00:29:09.591 [2024-12-09 11:44:01.655131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.655146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.665880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eea680 00:29:09.591 [2024-12-09 11:44:01.666936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 
11:44:01.666952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.677690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef0ff8 00:29:09.591 [2024-12-09 11:44:01.678747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.678763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.689532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef0ff8 00:29:09.591 [2024-12-09 11:44:01.690584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.690600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.701356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef0ff8 00:29:09.591 [2024-12-09 11:44:01.702402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.702417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.713168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef0ff8 00:29:09.591 [2024-12-09 11:44:01.714176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.714192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.724927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeea00 00:29:09.591 [2024-12-09 11:44:01.725961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.725976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.736740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeea00 00:29:09.591 [2024-12-09 11:44:01.737776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.591 [2024-12-09 11:44:01.737792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:09.591 [2024-12-09 11:44:01.748568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeea00 00:29:09.591 [2024-12-09 11:44:01.749609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:09.591 [2024-12-09 11:44:01.749624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.760392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeea00 00:29:09.853 [2024-12-09 11:44:01.761438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.761454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.773701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016eeea00 00:29:09.853 [2024-12-09 11:44:01.775382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.775398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.784049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efda78 00:29:09.853 [2024-12-09 11:44:01.785078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.785094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.795899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ede470 00:29:09.853 [2024-12-09 11:44:01.796933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.796949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.807013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016ef4b08 00:29:09.853 [2024-12-09 11:44:01.808023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.808041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.820502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.820813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.820829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.832674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.832991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20940 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.833007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.844846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.845197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.845214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.857041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.857336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.857352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.869197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.869491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.869507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.881355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.881668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.881684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.893537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.893836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.893852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.905703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.906008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.853 [2024-12-09 11:44:01.906028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:09.853 [2024-12-09 11:44:01.917860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90 00:29:09.853 [2024-12-09 11:44:01.918172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.853 [2024-12-09 11:44:01.918187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:09.853 [2024-12-09 11:44:01.930042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90
00:29:09.853 [2024-12-09 11:44:01.930369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.853 [2024-12-09 11:44:01.930384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:09.853 21488.50 IOPS, 83.94 MiB/s [2024-12-09T10:44:02.015Z] [2024-12-09 11:44:01.942205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f79d0) with pdu=0x200016efef90
00:29:09.853 [2024-12-09 11:44:01.942384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.853 [2024-12-09 11:44:01.942398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:09.853
00:29:09.853 Latency(us)
00:29:09.853 [2024-12-09T10:44:02.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.853 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:09.853 nvme0n1 : 2.01 21486.61 83.93 0.00 0.00 5945.86 2116.27 14199.47
00:29:09.853 [2024-12-09T10:44:02.015Z] ===================================================================================================================
00:29:09.853 [2024-12-09T10:44:02.015Z] Total : 21486.61 83.93 0.00 0.00 5945.86 2116.27 14199.47
00:29:09.853 {
00:29:09.853   "results": [
00:29:09.853     {
00:29:09.853       "job": "nvme0n1",
00:29:09.853       "core_mask": "0x2",
00:29:09.853       "workload": "randwrite",
00:29:09.853       "status": "finished",
00:29:09.853       "queue_depth": 128,
00:29:09.853       "io_size": 4096,
00:29:09.853       "runtime": 2.005761,
00:29:09.853       "iops": 21486.607826156756,
00:29:09.853       "mibps": 83.93206182092483,
00:29:09.853       "io_failed": 0,
00:29:09.853       "io_timeout": 0,
00:29:09.853       "avg_latency_us": 5945.859076965914,
00:29:09.853       "min_latency_us": 2116.266666666667,
00:29:09.853       "max_latency_us": 14199.466666666667
00:29:09.853     }
00:29:09.854   ],
00:29:09.854   "core_count": 1
00:29:09.854 }
00:29:09.854 11:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:09.854 11:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:09.854 11:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:09.854 | .driver_specific
00:29:09.854 | .nvme_error
00:29:09.854 | .status_code
00:29:09.854 | .command_transient_transport_error'
00:29:09.854 11:44:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 ))
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3704661
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3704661 ']'
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3704661
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704661
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704661'
killing process with pid 3704661
11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3704661
Received shutdown signal, test time was about 2.000000 seconds
00:29:10.114
00:29:10.114 Latency(us)
00:29:10.114 [2024-12-09T10:44:02.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.114 [2024-12-09T10:44:02.276Z] ===================================================================================================================
00:29:10.114 [2024-12-09T10:44:02.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:10.114 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3704661
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3705343
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3705343 /var/tmp/bperf.sock
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3705343 ']'
00:29:10.374 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:10.375 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:10.375 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:10.375 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
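The get_transient_errcount trace above is the pass/fail check for the run that just finished: it fetches per-bdev I/O statistics over the bperf RPC socket and reads the command_transient_transport_error counter out of the driver_specific.nvme_error section (169 for that run). A minimal bash sketch of the same check, assuming this job's rpc.py path and socket and the bdev name nvme0n1 from the trace:

#!/usr/bin/env bash
# Sketch of the transient-error check traced above. Assumptions: the same
# rpc.py path and bperf socket as this job, and bdev name nvme0n1.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# bdev_get_iostat returns JSON; because the controller was set up with
# --nvme-error-stat, driver_specific.nvme_error carries per-status-code
# NVMe error counters.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
	jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The run passes only if the injected digest corruptions surfaced as
# TRANSIENT TRANSPORT ERROR completions rather than failed I/O.
(( errcount > 0 )) && echo "transient transport errors: $errcount"

That --bdev-retry-count -1 retries failed I/O indefinitely is consistent with io_failed staying 0 in the JSON summary above while the errors accumulate in this counter instead.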
00:29:10.375 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.375 11:44:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.375 [2024-12-09 11:44:02.365400] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:29:10.375 [2024-12-09 11:44:02.365459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705343 ] 00:29:10.375 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.375 Zero copy mechanism will not be used. 00:29:10.375 [2024-12-09 11:44:02.450449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.375 [2024-12-09 11:44:02.480218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.319 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.579 nvme0n1 00:29:11.580 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:11.580 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.580 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.580 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.580 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:11.580 11:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
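With bdevperf idle, the digest-error pass is wired up entirely over RPC, exactly as traced above: error statistics and unlimited retries on the initiator side, crc32c corruption injected into the target's accel layer so the data digest the host verifies no longer matches. A condensed sketch of that same sequence (the commands are taken from the trace; that the target-side rpc.py calls go to its default socket is an assumption here):

  RPC="$ROOT/scripts/rpc.py"
  # Initiator: tally NVMe errors per status code and retry forever instead of failing I/O.
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target: keep crc32c clean while the controller attaches with data digest enabled.
  "$RPC" accel_error_inject_error -o crc32c -t disable
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target: corrupt the next 32 crc32c results, then drive the workload.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  # Initiator: each digest failure is retried and counted as a transient transport error.
  "$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The TRANSIENT TRANSPORT ERROR (00/22) completions that flood the log below are these retried digest failures; as with the previous pass, the test succeeds when the counter read at the end is greater than zero.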
00:29:11.840 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.840 Zero copy mechanism will not be used. 00:29:11.840 Running I/O for 2 seconds... 00:29:11.840 [2024-12-09 11:44:03.833006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.833262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.833288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.843907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.844045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.844063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.855344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.855693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.855709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.867325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.867415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.867430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.878975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.879292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.879320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.890360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.890641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.890656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.900032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.900430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.900445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.908888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.909217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.840 [2024-12-09 11:44:03.909233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:11.840 [2024-12-09 11:44:03.915910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.840 [2024-12-09 11:44:03.915985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.915999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.920316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.920420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.920435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.924764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.924843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.924858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.930417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.930511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.930526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.935784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.935857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.935872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.941968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.942057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.942075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.949045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.949133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.949148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.955398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.955501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.955516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.962168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.962437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.962452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.966625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.966682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.966697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.972746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.972813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.972828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.978111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.978166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.978181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.984778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.984845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.984860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.989559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.989654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.989669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:11.841 [2024-12-09 11:44:03.997631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:11.841 [2024-12-09 11:44:03.997888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.841 [2024-12-09 11:44:03.997903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.003712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.004017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.004033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.012577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.012647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.012662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.017144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.017232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.017247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.023969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.024049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.024064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.028801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.028899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.028915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.034192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.034264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.034279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.039593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.039667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.039682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.043899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.043965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.043981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.049360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.049440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.049455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.057071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.057145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.057160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.063997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.064085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.064101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.070664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 
11:44:04.070727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.070742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.077408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.077483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.077498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.084574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.084666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.104 [2024-12-09 11:44:04.084681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.104 [2024-12-09 11:44:04.088763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.104 [2024-12-09 11:44:04.088831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.088847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.092999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.093099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.093114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.097801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.097907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.097925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.103675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.103751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.103767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.107853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with 
pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.107939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.107954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.112113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.112194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.112210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.119302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.119368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.119383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.124910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.124987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.125002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.130743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.130810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.130825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.135417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.135491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.135506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.139400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.139469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.139484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.143360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.143447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.143463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.147765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.147870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.147885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.155621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.155934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.155949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.162967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.163074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.163090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.169270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.169343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.169358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.176543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.176650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.176665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.185524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.185628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.185643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.194102] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.194206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.194221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.200712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.200805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.200821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.207732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.207842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.207858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.214718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.214777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.214791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.221445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.221734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.221749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.230950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.231028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.231043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.239578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.239868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.239882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.249911] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.250113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.250129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.105 [2024-12-09 11:44:04.261219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.105 [2024-12-09 11:44:04.261541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.105 [2024-12-09 11:44:04.261557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.273972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.274102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.274117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.285092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.285386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.285404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.296301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.296375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.296390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.306344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.306416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.306431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.314952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.315021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.315036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.368 
[2024-12-09 11:44:04.323667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.323736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.323751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.332881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.332960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.332975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.341108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.341178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.341193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.348850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.348938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.348953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.357358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.357426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.357441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.365972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.366050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.368 [2024-12-09 11:44:04.366065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.368 [2024-12-09 11:44:04.374483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.368 [2024-12-09 11:44:04.374577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.374593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 
p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.380516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.380579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.380594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.388281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.388360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.388374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.396598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.396666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.396681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.404294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.404598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.404613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.411652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.411718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.411733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.419386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.419483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.419498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.425604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.425702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.425717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.431192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.431289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.431304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.437049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.437252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.437267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.444064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.444172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.444187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.450104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.450303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.450318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.457153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.457262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.457277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.462710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.462775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.462790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.469459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.469561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.469577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.475577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.475679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.475694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.481984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.482097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.482115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.490243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.490317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.490333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.495517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.495618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.495634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.502278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.502533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.502548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.510746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.510815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.510830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.517627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.517733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.517748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.369 [2024-12-09 11:44:04.524527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.369 [2024-12-09 11:44:04.524835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.369 [2024-12-09 11:44:04.524850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.530573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.530781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.530796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.535100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.535172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.535187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.540476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.540733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.540748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.547343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.547418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.547433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.555825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.556060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.556075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.562093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.562380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 
11:44:04.562395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.571606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.571939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.571954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.579569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.579652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.579667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.585338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.585428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.585442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.592863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.592970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.592985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.598511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.598592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.598608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.604969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.605050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.605065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.612350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.612426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:12.632 [2024-12-09 11:44:04.612441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.617981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.618073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.618089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.623933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.624003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.624024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.631878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.631943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.631958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.638287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.638573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.638588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.645481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.645704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.645718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.651896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.651979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.651994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.657801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.657883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.657900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.663320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.663577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.663592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.632 [2024-12-09 11:44:04.670224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.632 [2024-12-09 11:44:04.670299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.632 [2024-12-09 11:44:04.670314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.679436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.679635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.679651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.688176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.688458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.688473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.694865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.694980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.694995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.702622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.702683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.702698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.709195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.709264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.709280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.718248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.718310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.718325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.726363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.726636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.726652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.731976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.732045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.732061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.740651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.740717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.740732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.748898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.748965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.748981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.754614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.754688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.754703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.761325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.761397] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.761412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.770437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.770633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.770648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.778865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.778986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.779002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.633 [2024-12-09 11:44:04.788003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.633 [2024-12-09 11:44:04.788074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.633 [2024-12-09 11:44:04.788089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.795692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.795765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.795781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.803962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.804036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.804051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.811636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.811758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.811773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.896 4289.00 IOPS, 536.12 MiB/s [2024-12-09T10:44:05.058Z] [2024-12-09 11:44:04.820509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 
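Each data_crc32_calc_done: *ERROR*: Data digest error record above means the CRC32C data digest (DDGST) computed for an NVMe/TCP data PDU on this qpair did not match the digest carried in the PDU; the paired NOTICE lines then print the affected WRITE and its completion with status (00/22), COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of the digest check, assuming a plain bitwise CRC32C (SPDK itself uses accelerated CRC helpers; ddgst_ok, pdu_data and expected_ddgst are illustrative names, not SPDK API):

    def crc32c(data: bytes, crc: int = 0) -> int:
        # Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
        # Slow but dependency-free; for illustration only.
        crc ^= 0xFFFFFFFF
        for b in data:
            crc ^= b
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
        return crc ^ 0xFFFFFFFF

    # Standard CRC32C check value.
    assert crc32c(b"123456789") == 0xE3069283

    def ddgst_ok(pdu_data: bytes, expected_ddgst: int) -> bool:
        # Hypothetical form of the check these log lines report on:
        # the DDGST of a data PDU is the CRC32C of its DATA payload.
        return crc32c(pdu_data) == expected_ddgst

When the check fails, the command cannot be completed normally, which is exactly the WRITE/completion pair that follows each error record below.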
00:29:12.896 [2024-12-09 11:44:04.820575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.820590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.828935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.829001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.829022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.834974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.835040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.835055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.841810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.841877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.841892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.848356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.848491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.848506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.854562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.854631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.854650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.861074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.861186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.861202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.868426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.868518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.868533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.875243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.875327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.875342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.881449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.881515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.881530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.887199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.887274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.887290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.892626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.892692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.892707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.900936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.900998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.901018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.905811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.905883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.905898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.910408] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.910640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.910655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.896 [2024-12-09 11:44:04.918408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.896 [2024-12-09 11:44:04.918476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.896 [2024-12-09 11:44:04.918491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.924996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.925264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.925279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.933117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.933184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.933199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.939609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.939708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.939723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.948603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.948666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.948681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.954894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.954956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.954971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.961180] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.961238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.961254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.968396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.968503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.968518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.976350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.976415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.976431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.984695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.984836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.984851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:04.993819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:04.993884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:04.993900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.000978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.001079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.001094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.007318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.007383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.007398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.897 
[2024-12-09 11:44:05.012079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.012192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.012207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.019435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.019502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.019517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.027724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.027790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.027805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.032568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.032644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.032662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.037499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.037609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.037625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.042809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.042917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.042933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.047968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.048080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.048095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 00:29:12.897 [2024-12-09 11:44:05.053828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:12.897 [2024-12-09 11:44:05.053928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.897 [2024-12-09 11:44:05.053942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.060722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.060790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.060805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.065964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.066073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.066089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.074463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.074757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.074772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.080700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.080798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.080813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.088707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.088777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.088792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.095892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.095954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.095969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.103235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.103301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.103317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.110420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.110555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.110570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.119112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.119197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.119213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.125432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.159 [2024-12-09 11:44:05.125499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.159 [2024-12-09 11:44:05.125514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.159 [2024-12-09 11:44:05.130471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.130543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.130558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.135067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.135174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.135189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.139022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.139136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.139151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.145958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.146066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.146082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.150892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.150964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.150979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.156123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.156190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.156205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.164355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.164466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.164481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.171932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.172003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.172023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.178941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.179050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.179065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.186629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.186699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.186713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.197440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.197502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.197517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.205302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.205564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.205582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.213632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.213700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.213715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.223111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.223187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.223202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.230992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.231072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.231088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.239329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.239429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.239444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.247124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.247191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 
11:44:05.247206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.256131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.256200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.256215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.262769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.262861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.262877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.269713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.269782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.269797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.276846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.276915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.276930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.285584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.285651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.285666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.291681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.291749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.160 [2024-12-09 11:44:05.291763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.160 [2024-12-09 11:44:05.298730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.160 [2024-12-09 11:44:05.298829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
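The completion NOTICE lines in this run all share one shape, which makes long stretches like this easy to triage mechanically. A small sketch of a parser for them, assuming the exact field order printed by spdk_nvme_print_completion above (the regex and helper name are ours, not SPDK's):

    import re

    COMPLETION_RE = re.compile(
        r"\*NOTICE\*: (?P<status>.+?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
        r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>[0-9a-f]+) "
        r"sqhd:(?P<sqhd>[0-9a-f]+) p:(?P<p>\d) m:(?P<m>\d) dnr:(?P<dnr>\d)"
    )

    def parse_completion(line: str):
        # Returns the decoded fields of one completion record, or None
        # if the line is not a completion NOTICE.
        m = COMPLETION_RE.search(line)
        if not m:
            return None
        d = m.groupdict()
        return {
            "status": d["status"],
            "sct": int(d["sct"], 16),   # status code type
            "sc": int(d["sc"], 16),     # status code
            "qid": int(d["qid"]),
            "cid": int(d["cid"]),
            "sqhd": int(d["sqhd"], 16),
            "retryable": d["dnr"] == "0",  # DNR bit clear => may retry
        }

Here sct 00 is the generic command status set and sc 22h is Command Transient Transport Error; with dnr:0 on every completion the errors are retryable, which is why the workload keeps issuing WRITEs and the log keeps repeating the same triplet with new LBAs.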
00:29:13.160 [2024-12-09 11:44:05.298844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.161 [2024-12-09 11:44:05.304805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.161 [2024-12-09 11:44:05.304940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-12-09 11:44:05.304955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.161 [2024-12-09 11:44:05.311410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.161 [2024-12-09 11:44:05.311480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-12-09 11:44:05.311495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.161 [2024-12-09 11:44:05.318720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.161 [2024-12-09 11:44:05.318819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.161 [2024-12-09 11:44:05.318834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.422 [2024-12-09 11:44:05.324702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.422 [2024-12-09 11:44:05.324769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-12-09 11:44:05.324784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.422 [2024-12-09 11:44:05.331726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.422 [2024-12-09 11:44:05.331792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-12-09 11:44:05.331808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.422 [2024-12-09 11:44:05.338634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.422 [2024-12-09 11:44:05.338695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-12-09 11:44:05.338710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.422 [2024-12-09 11:44:05.345571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.422 [2024-12-09 11:44:05.345640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-12-09 11:44:05.345655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.422 [2024-12-09 11:44:05.353232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.422 [2024-12-09 11:44:05.353308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-12-09 11:44:05.353324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.422 [2024-12-09 11:44:05.359776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.422 [2024-12-09 11:44:05.359842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.422 [2024-12-09 11:44:05.359857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.367026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.367094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.367109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.373985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.374053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.374068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.382447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.382519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.382535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.389911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.390026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.390041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.398579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.398646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.398665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.404701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.404768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.404784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.411986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.412057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.412073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.418958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.419027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.419042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.425381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.425444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.425459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.433485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.433556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.433571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.440323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.440395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.440410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.448417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.448510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.448525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.457217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.457284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.457300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.464310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.464388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.464404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.473088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.473161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.473176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.480387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.480455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.480471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.490207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.490273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.490288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.500325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.500395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.500411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.506363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.506431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.506445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.515789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.515855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.515870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.524508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.524578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.524593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.536670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.536915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.536930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.548642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.548709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.548724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.560576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.560768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.560783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.423 [2024-12-09 11:44:05.572714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.423 [2024-12-09 11:44:05.572787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.423 [2024-12-09 11:44:05.572802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.584552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 
11:44:05.584812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.584827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.595758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.595820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.595835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.606840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.606903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.606918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.617994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.618067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.618082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.630069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.630145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.630159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.641980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.642316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.642335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.654240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.654539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.654554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.665880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with 
pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.665945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.665960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.677962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.678150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.678166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.689476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.689550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.689565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.701588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.701874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.701889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.713776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.713874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.713890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.725914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.726270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.726285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.737941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.738016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.738031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.685 [2024-12-09 11:44:05.749947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.685 [2024-12-09 11:44:05.750124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.685 [2024-12-09 11:44:05.750139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.686 [2024-12-09 11:44:05.762667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.686 [2024-12-09 11:44:05.762916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.686 [2024-12-09 11:44:05.762931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.686 [2024-12-09 11:44:05.775153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.686 [2024-12-09 11:44:05.775264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.686 [2024-12-09 11:44:05.775280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.686 [2024-12-09 11:44:05.787420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.686 [2024-12-09 11:44:05.787679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.686 [2024-12-09 11:44:05.787694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:13.686 [2024-12-09 11:44:05.799865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.686 [2024-12-09 11:44:05.799959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.686 [2024-12-09 11:44:05.799974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:13.686 [2024-12-09 11:44:05.812178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.686 [2024-12-09 11:44:05.812290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.686 [2024-12-09 11:44:05.812306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:13.686 4064.00 IOPS, 508.00 MiB/s [2024-12-09T10:44:05.848Z] [2024-12-09 11:44:05.824328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14f7d10) with pdu=0x200016efef90 00:29:13.686 [2024-12-09 11:44:05.824397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.686 [2024-12-09 11:44:05.824412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:13.686 00:29:13.686 
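Each record pair above is one data-digest failure on the TCP qpair: tcp.c:2241 (data_crc32_calc_done) flags the CRC32C mismatch on the PDU, and the matching 128 KiB WRITE (len:32, IO size 131072) then completes with COMMAND TRANSIENT TRANSPORT ERROR, generic status 00/22. The digest_error test counts these completions rather than failing on them. A minimal sketch of that counting step, assuming the bperf RPC socket and bdev name from this run and rpc.py invoked from the SPDK checkout; the harness runs the equivalent pipeline just below:

    # sketch: read the transient-transport-error counter off the bperf bdev
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The run passes when this counter is positive, as in the (( 263 > 0 )) check that follows the per-job results: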
Latency(us) 00:29:13.686 [2024-12-09T10:44:05.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.686 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:13.686 nvme0n1 : 2.01 4061.88 507.74 0.00 0.00 3932.71 1856.85 16930.13 00:29:13.686 [2024-12-09T10:44:05.848Z] =================================================================================================================== 00:29:13.686 [2024-12-09T10:44:05.848Z] Total : 4061.88 507.74 0.00 0.00 3932.71 1856.85 16930.13 00:29:13.686 { 00:29:13.686 "results": [ 00:29:13.686 { 00:29:13.686 "job": "nvme0n1", 00:29:13.686 "core_mask": "0x2", 00:29:13.686 "workload": "randwrite", 00:29:13.686 "status": "finished", 00:29:13.686 "queue_depth": 16, 00:29:13.686 "io_size": 131072, 00:29:13.686 "runtime": 2.005721, 00:29:13.686 "iops": 4061.880989429736, 00:29:13.686 "mibps": 507.735123678717, 00:29:13.686 "io_failed": 0, 00:29:13.686 "io_timeout": 0, 00:29:13.686 "avg_latency_us": 3932.7113620555624, 00:29:13.686 "min_latency_us": 1856.8533333333332, 00:29:13.686 "max_latency_us": 16930.133333333335 00:29:13.686 } 00:29:13.686 ], 00:29:13.686 "core_count": 1 00:29:13.686 } 00:29:13.947 11:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:13.947 11:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:13.947 11:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:13.947 | .driver_specific 00:29:13.947 | .nvme_error 00:29:13.947 | .status_code 00:29:13.947 | .command_transient_transport_error' 00:29:13.947 11:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:13.947 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 263 > 0 )) 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3705343 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3705343 ']' 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3705343 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3705343 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3705343' 00:29:13.948 killing process with pid 3705343 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3705343 00:29:13.948 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.948 00:29:13.948 Latency(us) 00:29:13.948 [2024-12-09T10:44:06.110Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.948 [2024-12-09T10:44:06.110Z] =================================================================================================================== 00:29:13.948 [2024-12-09T10:44:06.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.948 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3705343 00:29:14.208 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3702945 00:29:14.208 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3702945 ']' 00:29:14.208 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3702945 00:29:14.208 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:14.208 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.208 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3702945 00:29:14.209 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.209 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.209 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3702945' 00:29:14.209 killing process with pid 3702945 00:29:14.209 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3702945 00:29:14.209 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3702945 00:29:14.470 00:29:14.470 real 0m16.469s 00:29:14.470 user 0m32.612s 00:29:14.470 sys 0m3.476s 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.470 ************************************ 00:29:14.470 END TEST nvmf_digest_error 00:29:14.470 ************************************ 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.470 rmmod nvme_tcp 00:29:14.470 rmmod nvme_fabrics 00:29:14.470 rmmod nvme_keyring 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@517 -- # '[' -n 3702945 ']' 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3702945 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3702945 ']' 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3702945 00:29:14.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3702945) - No such process 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3702945 is not found' 00:29:14.470 Process with pid 3702945 is not found 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.470 11:44:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.014 11:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.014 00:29:17.014 real 0m43.035s 00:29:17.014 user 1m7.856s 00:29:17.014 sys 0m12.645s 00:29:17.014 11:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.014 11:44:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:17.014 ************************************ 00:29:17.014 END TEST nvmf_digest 00:29:17.014 ************************************ 00:29:17.014 11:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:17.014 11:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:17.014 11:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.015 ************************************ 00:29:17.015 START TEST nvmf_bdevperf 00:29:17.015 ************************************ 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:17.015 * Looking for test 
storage... 00:29:17.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:17.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.015 --rc genhtml_branch_coverage=1 00:29:17.015 --rc genhtml_function_coverage=1 00:29:17.015 --rc genhtml_legend=1 00:29:17.015 --rc geninfo_all_blocks=1 00:29:17.015 --rc geninfo_unexecuted_blocks=1 00:29:17.015 00:29:17.015 ' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:17.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.015 --rc genhtml_branch_coverage=1 00:29:17.015 --rc genhtml_function_coverage=1 00:29:17.015 --rc genhtml_legend=1 00:29:17.015 --rc geninfo_all_blocks=1 00:29:17.015 --rc geninfo_unexecuted_blocks=1 00:29:17.015 00:29:17.015 ' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:17.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.015 --rc genhtml_branch_coverage=1 00:29:17.015 --rc genhtml_function_coverage=1 00:29:17.015 --rc genhtml_legend=1 00:29:17.015 --rc geninfo_all_blocks=1 00:29:17.015 --rc geninfo_unexecuted_blocks=1 00:29:17.015 00:29:17.015 ' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:17.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.015 --rc genhtml_branch_coverage=1 00:29:17.015 --rc genhtml_function_coverage=1 00:29:17.015 --rc genhtml_legend=1 00:29:17.015 --rc geninfo_all_blocks=1 00:29:17.015 --rc geninfo_unexecuted_blocks=1 00:29:17.015 00:29:17.015 ' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.015 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.016 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:25.155 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:25.155 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:25.155 Found net devices under 0000:31:00.0: cvl_0_0 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:25.155 Found net devices under 0000:31:00.1: cvl_0_1 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.155 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:29:25.156 00:29:25.156 --- 10.0.0.2 ping statistics --- 00:29:25.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.156 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:29:25.156 00:29:25.156 --- 10.0.0.1 ping statistics --- 00:29:25.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.156 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3710426 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3710426 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3710426 ']' 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 [2024-12-09 11:44:16.477347] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
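The connectivity wiring traced above is how nvmf_tcp_init splits one physical e810 port pair between target and initiator: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and carries the target IP, cvl_0_1 stays in the root namespace as the initiator, and one ping in each direction proves the link before any NVMe/TCP traffic. Condensed from the trace, with the interface and namespace names as detected on this node:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator

With both pings answered, nvmfappstart launches nvmf_tgt inside the namespace and the EAL bring-up follows: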
00:29:25.156 [2024-12-09 11:44:16.477396] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.156 [2024-12-09 11:44:16.572149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:25.156 [2024-12-09 11:44:16.607766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.156 [2024-12-09 11:44:16.607799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.156 [2024-12-09 11:44:16.607807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.156 [2024-12-09 11:44:16.607813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.156 [2024-12-09 11:44:16.607819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.156 [2024-12-09 11:44:16.609118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.156 [2024-12-09 11:44:16.609170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.156 [2024-12-09 11:44:16.609171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 [2024-12-09 11:44:16.733529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 Malloc0 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
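With the app listening on its RPC socket, tgt_init stands the target up in five rpc_cmd calls: the TCP transport, a 64 MiB malloc bdev to back it, the subsystem, its namespace, and the listener (the last two are traced just below). As plain rpc.py invocations, a sketch assuming the default /var/tmp/spdk.sock that rpc_cmd resolves:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, bdevperf attaches from the initiator side using the JSON that gen_nvmf_target_json emits below (hdgst and ddgst false for this baseline pass) and runs verify I/O against Nvme1n1: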
00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.156 [2024-12-09 11:44:16.803157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.156 { 00:29:25.156 "params": { 00:29:25.156 "name": "Nvme$subsystem", 00:29:25.156 "trtype": "$TEST_TRANSPORT", 00:29:25.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.156 "adrfam": "ipv4", 00:29:25.156 "trsvcid": "$NVMF_PORT", 00:29:25.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.156 "hdgst": ${hdgst:-false}, 00:29:25.156 "ddgst": ${ddgst:-false} 00:29:25.156 }, 00:29:25.156 "method": "bdev_nvme_attach_controller" 00:29:25.156 } 00:29:25.156 EOF 00:29:25.156 )") 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:25.156 11:44:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:25.156 "params": { 00:29:25.156 "name": "Nvme1", 00:29:25.156 "trtype": "tcp", 00:29:25.156 "traddr": "10.0.0.2", 00:29:25.156 "adrfam": "ipv4", 00:29:25.156 "trsvcid": "4420", 00:29:25.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.156 "hdgst": false, 00:29:25.156 "ddgst": false 00:29:25.156 }, 00:29:25.156 "method": "bdev_nvme_attach_controller" 00:29:25.156 }' 00:29:25.156 [2024-12-09 11:44:16.856186] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
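Each rpc_cmd above is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the whole target-side provisioning collapses to five RPCs. Run by hand it would look like this sketch (arguments copied from the log):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420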
00:29:25.156 [2024-12-09 11:44:16.856238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710483 ] 00:29:25.156 [2024-12-09 11:44:16.927706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.156 [2024-12-09 11:44:16.964148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.156 Running I/O for 1 seconds... 00:29:26.098 8856.00 IOPS, 34.59 MiB/s 00:29:26.098 Latency(us) 00:29:26.098 [2024-12-09T10:44:18.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.098 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:26.098 Verification LBA range: start 0x0 length 0x4000 00:29:26.098 Nvme1n1 : 1.02 8939.24 34.92 0.00 0.00 14255.32 3085.65 17476.27 00:29:26.098 [2024-12-09T10:44:18.260Z] =================================================================================================================== 00:29:26.098 [2024-12-09T10:44:18.260Z] Total : 8939.24 34.92 0.00 0.00 14255.32 3085.65 17476.27 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3710793 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.098 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.098 { 00:29:26.098 "params": { 00:29:26.098 "name": "Nvme$subsystem", 00:29:26.098 "trtype": "$TEST_TRANSPORT", 00:29:26.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.098 "adrfam": "ipv4", 00:29:26.098 "trsvcid": "$NVMF_PORT", 00:29:26.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.098 "hdgst": ${hdgst:-false}, 00:29:26.098 "ddgst": ${ddgst:-false} 00:29:26.098 }, 00:29:26.098 "method": "bdev_nvme_attach_controller" 00:29:26.098 } 00:29:26.098 EOF 00:29:26.098 )") 00:29:26.358 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:26.358 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
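bdevperf is not an initiator CLI; it reads an SPDK JSON config describing the bdevs to attach, fed here on a file descriptor (--json /dev/fd/62 for the 1 s smoke run, /dev/fd/63 for the 15 s failover run). gen_nvmf_target_json emits the printed bdev_nvme_attach_controller object wrapped in the usual subsystems layout. Expanded into a standalone invocation it is roughly this sketch (the outer "subsystems"/"config" wrapper is the standard SPDK app config shape, inferred rather than printed in the log; flags as in the log):

    # /tmp/nvme1.json (contents):
    {
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false }
      } ] } ]
    }
    # 128 queued I/Os of 4096 B each, verify workload, 15 s run
    ./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f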
00:29:26.358 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:26.358 11:44:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme1", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 }' 00:29:26.358 [2024-12-09 11:44:18.304224] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:29:26.358 [2024-12-09 11:44:18.304276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710793 ] 00:29:26.358 [2024-12-09 11:44:18.376237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.358 [2024-12-09 11:44:18.410791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.619 Running I/O for 15 seconds... 00:29:28.947 10881.00 IOPS, 42.50 MiB/s [2024-12-09T10:44:21.373Z] 11042.50 IOPS, 43.13 MiB/s [2024-12-09T10:44:21.373Z] 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3710426 00:29:29.211 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:29.211 [2024-12-09 11:44:21.268706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.211 [2024-12-09 11:44:21.268747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.211 [2024-12-09 11:44:21.268767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.211 [2024-12-09 11:44:21.268777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.211 [2024-12-09 11:44:21.268790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.211 [2024-12-09 11:44:21.268798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.211 [2024-12-09 11:44:21.268808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.211 [2024-12-09 11:44:21.268817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.211 [2024-12-09 11:44:21.268828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.211 [2024-12-09 11:44:21.268837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.211 [2024-12-09 11:44:21.268854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.211 [2024-12-09 
11:44:21.268862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair of NOTICE lines repeats for every remaining queued command on qid:1: nvme_io_qpair_print_command prints each outstanding WRITE/READ (lba 97432 through lba 98448, len:8) and spdk_nvme_print_completion aborts each with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, exactly as in the pairs above ...]
00:29:29.214 [2024-12-09 11:44:21.271050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306660 is same with the state(6) to be set 00:29:29.214 [2024-12-09 11:44:21.271060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:29.214 [2024-12-09 11:44:21.271066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:29.214 [2024-12-09 11:44:21.271073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:29:29.214 [2024-12-09 11:44:21.271081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.214 [2024-12-09 11:44:21.271160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.215 [2024-12-09 11:44:21.271171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-12-09 11:44:21.271180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.215 [2024-12-09 11:44:21.271187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-12-09 11:44:21.271196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.215 [2024-12-09 11:44:21.271206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-12-09 11:44:21.271215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.215 [2024-12-09 11:44:21.271222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.215 [2024-12-09 11:44:21.271229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:29.215 [2024-12-09 11:44:21.275037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.215 [2024-12-09 11:44:21.275065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:29.215 [2024-12-09 11:44:21.275818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.215 [2024-12-09 11:44:21.275835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:29.215 [2024-12-09 11:44:21.275844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:29.215 [2024-12-09 11:44:21.276070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:29.215 [2024-12-09 11:44:21.276290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.215 [2024-12-09 11:44:21.276299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:29:29.215 [2024-12-09 11:44:21.276308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.215 [2024-12-09 11:44:21.276318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.215 [2024-12-09 11:44:21.289192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.215 [2024-12-09 11:44:21.289840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.215 [2024-12-09 11:44:21.289879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:29.215 [2024-12-09 11:44:21.289890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:29.215 [2024-12-09 11:44:21.290138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:29.215 [2024-12-09 11:44:21.290362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.215 [2024-12-09 11:44:21.290371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.215 [2024-12-09 11:44:21.290380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.215 [2024-12-09 11:44:21.290388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.215 [2024-12-09 11:44:21.303069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.215 [2024-12-09 11:44:21.303631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.215 [2024-12-09 11:44:21.303669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:29.215 [2024-12-09 11:44:21.303680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:29.215 [2024-12-09 11:44:21.303918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:29.215 [2024-12-09 11:44:21.304153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.215 [2024-12-09 11:44:21.304164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.215 [2024-12-09 11:44:21.304173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.215 [2024-12-09 11:44:21.304181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.215 [2024-12-09 11:44:21.316861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.215 [2024-12-09 11:44:21.317549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.215 [2024-12-09 11:44:21.317588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.215 [2024-12-09 11:44:21.317599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.215 [2024-12-09 11:44:21.317837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.215 [2024-12-09 11:44:21.318067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.215 [2024-12-09 11:44:21.318076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.215 [2024-12-09 11:44:21.318084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.215 [2024-12-09 11:44:21.318093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.215 [2024-12-09 11:44:21.330759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.215 [2024-12-09 11:44:21.331424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.215 [2024-12-09 11:44:21.331462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.215 [2024-12-09 11:44:21.331474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.215 [2024-12-09 11:44:21.331712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.215 [2024-12-09 11:44:21.331935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.215 [2024-12-09 11:44:21.331944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.215 [2024-12-09 11:44:21.331951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.215 [2024-12-09 11:44:21.331960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.215 [2024-12-09 11:44:21.344629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.215 [2024-12-09 11:44:21.345322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.215 [2024-12-09 11:44:21.345361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.215 [2024-12-09 11:44:21.345372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.215 [2024-12-09 11:44:21.345610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.215 [2024-12-09 11:44:21.345832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.215 [2024-12-09 11:44:21.345841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.215 [2024-12-09 11:44:21.345853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.215 [2024-12-09 11:44:21.345861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.215 [2024-12-09 11:44:21.358539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.215 [2024-12-09 11:44:21.359150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.215 [2024-12-09 11:44:21.359187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.215 [2024-12-09 11:44:21.359198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.215 [2024-12-09 11:44:21.359435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.215 [2024-12-09 11:44:21.359657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.215 [2024-12-09 11:44:21.359666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.215 [2024-12-09 11:44:21.359674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.215 [2024-12-09 11:44:21.359682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.372359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.372868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.372906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.372917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.373164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.373386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.373396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.373404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.478 [2024-12-09 11:44:21.373412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.386293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.386968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.387005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.387025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.387264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.387486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.387495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.387502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.478 [2024-12-09 11:44:21.387510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.400188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.400873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.400911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.400922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.401169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.401392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.401401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.401409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.478 [2024-12-09 11:44:21.401417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.414129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.414821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.414859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.414870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.415116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.415339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.415348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.415356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.478 [2024-12-09 11:44:21.415364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.428031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.428703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.428741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.428752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.428990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.429221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.429232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.429240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.478 [2024-12-09 11:44:21.429247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.441914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.442454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.442491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.442506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.442744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.442966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.442975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.442983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.478 [2024-12-09 11:44:21.442991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.455884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.456535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.456573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.456585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.456822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.457053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.457063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.457071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.478 [2024-12-09 11:44:21.457079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.478 [2024-12-09 11:44:21.469739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.478 [2024-12-09 11:44:21.470407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.478 [2024-12-09 11:44:21.470445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.478 [2024-12-09 11:44:21.470456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.478 [2024-12-09 11:44:21.470694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.478 [2024-12-09 11:44:21.470916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.478 [2024-12-09 11:44:21.470925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.478 [2024-12-09 11:44:21.470933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.470941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.483617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.484314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.484352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.484363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.484601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.484827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.484837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.484845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.484852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.497539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.498147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.498185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.498195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.498433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.498655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.498664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.498672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.498680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.511371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.512048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.512087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.512100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.512341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.512563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.512572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.512580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.512588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.525262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.525917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.525954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.525965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.526212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.526435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.526444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.526457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.526465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.539141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.539705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.539724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.539732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.539951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.540176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.540185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.540193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.540200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.553077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.553707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.553745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.553756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.553994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.554224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.554234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.554242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.554251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.566916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.567570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.567608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.567619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.567856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.568087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.568096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.568105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.568113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.580779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.581433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.581471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.581483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.581720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.581943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.581952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.581960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.581968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.479 [2024-12-09 11:44:21.594653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.479 [2024-12-09 11:44:21.595329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.479 [2024-12-09 11:44:21.595368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.479 [2024-12-09 11:44:21.595379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.479 [2024-12-09 11:44:21.595617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.479 [2024-12-09 11:44:21.595839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.479 [2024-12-09 11:44:21.595848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.479 [2024-12-09 11:44:21.595855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.479 [2024-12-09 11:44:21.595863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.480 [2024-12-09 11:44:21.608534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.480 [2024-12-09 11:44:21.609143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.480 [2024-12-09 11:44:21.609181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.480 [2024-12-09 11:44:21.609193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.480 [2024-12-09 11:44:21.609435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.480 [2024-12-09 11:44:21.609657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.480 [2024-12-09 11:44:21.609666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.480 [2024-12-09 11:44:21.609674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.480 [2024-12-09 11:44:21.609681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.480 [2024-12-09 11:44:21.622392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.480 [2024-12-09 11:44:21.623036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.480 [2024-12-09 11:44:21.623074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.480 [2024-12-09 11:44:21.623091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.480 [2024-12-09 11:44:21.623330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.480 [2024-12-09 11:44:21.623553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.480 [2024-12-09 11:44:21.623562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.480 [2024-12-09 11:44:21.623570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.480 [2024-12-09 11:44:21.623578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.480 [2024-12-09 11:44:21.636254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.480 [2024-12-09 11:44:21.636937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.480 [2024-12-09 11:44:21.636974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.480 [2024-12-09 11:44:21.636986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.742 [2024-12-09 11:44:21.637237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.742 [2024-12-09 11:44:21.637460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.742 [2024-12-09 11:44:21.637471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.742 [2024-12-09 11:44:21.637480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.742 [2024-12-09 11:44:21.637488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.742 [2024-12-09 11:44:21.650157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.742 [2024-12-09 11:44:21.650837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.742 [2024-12-09 11:44:21.650875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.742 [2024-12-09 11:44:21.650887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.742 [2024-12-09 11:44:21.651133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.742 [2024-12-09 11:44:21.651357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.742 [2024-12-09 11:44:21.651366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.742 [2024-12-09 11:44:21.651374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.742 [2024-12-09 11:44:21.651382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.742 [2024-12-09 11:44:21.664055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.742 [2024-12-09 11:44:21.664678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.742 [2024-12-09 11:44:21.664715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.742 [2024-12-09 11:44:21.664726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.742 [2024-12-09 11:44:21.664964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.742 [2024-12-09 11:44:21.665199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.742 [2024-12-09 11:44:21.665209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.742 [2024-12-09 11:44:21.665217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.742 [2024-12-09 11:44:21.665225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.677889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.678550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.678588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.678600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.678838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.679070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.679079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.679087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.679095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.691759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.692443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.692480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.692491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.692729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.692961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.692971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.692979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.692987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 9521.00 IOPS, 37.19 MiB/s [2024-12-09T10:44:21.905Z] [2024-12-09 11:44:21.705800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.706457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.706495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.706506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.706744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.706966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.706975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.706988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.706996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.719681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.720276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.720296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.720305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.720523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.720742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.720750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.720757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.720764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.733632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.734169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.734187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.734194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.734412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.734630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.734638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.734645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.734652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.747516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.748091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.748130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.748142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.748381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.748603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.748612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.748620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.748628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.761307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.762067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.762105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.762116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.762354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.762577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.762586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.762594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.762602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.775278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.775817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.775856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.775868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.776114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.776338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.776347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.776355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.776363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.789247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.789797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.789818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.789826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.790050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.790269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.790279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.790285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.790293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.803181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.743 [2024-12-09 11:44:21.803843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.743 [2024-12-09 11:44:21.803881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.743 [2024-12-09 11:44:21.803897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.743 [2024-12-09 11:44:21.804142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.743 [2024-12-09 11:44:21.804366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.743 [2024-12-09 11:44:21.804375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.743 [2024-12-09 11:44:21.804383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.743 [2024-12-09 11:44:21.804390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.743 [2024-12-09 11:44:21.817066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.744 [2024-12-09 11:44:21.817743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.744 [2024-12-09 11:44:21.817781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.744 [2024-12-09 11:44:21.817792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.744 [2024-12-09 11:44:21.818037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.744 [2024-12-09 11:44:21.818260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.744 [2024-12-09 11:44:21.818270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.744 [2024-12-09 11:44:21.818278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.744 [2024-12-09 11:44:21.818286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.744 [2024-12-09 11:44:21.830979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.744 [2024-12-09 11:44:21.831483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.744 [2024-12-09 11:44:21.831503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.744 [2024-12-09 11:44:21.831511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.744 [2024-12-09 11:44:21.831730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.744 [2024-12-09 11:44:21.831947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.744 [2024-12-09 11:44:21.831956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.744 [2024-12-09 11:44:21.831963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.744 [2024-12-09 11:44:21.831970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.744 [2024-12-09 11:44:21.844842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.744 [2024-12-09 11:44:21.845365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.744 [2024-12-09 11:44:21.845403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.744 [2024-12-09 11:44:21.845414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.744 [2024-12-09 11:44:21.845652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.744 [2024-12-09 11:44:21.845883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.744 [2024-12-09 11:44:21.845892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.744 [2024-12-09 11:44:21.845900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.744 [2024-12-09 11:44:21.845908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.744 [2024-12-09 11:44:21.858792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.744 [2024-12-09 11:44:21.859464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.744 [2024-12-09 11:44:21.859501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.744 [2024-12-09 11:44:21.859512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.744 [2024-12-09 11:44:21.859751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.744 [2024-12-09 11:44:21.859973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.744 [2024-12-09 11:44:21.859982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.744 [2024-12-09 11:44:21.859990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.744 [2024-12-09 11:44:21.859997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.744 [2024-12-09 11:44:21.872672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.744 [2024-12-09 11:44:21.873316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.744 [2024-12-09 11:44:21.873354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.744 [2024-12-09 11:44:21.873365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.744 [2024-12-09 11:44:21.873603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.744 [2024-12-09 11:44:21.873826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.744 [2024-12-09 11:44:21.873834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.744 [2024-12-09 11:44:21.873842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.744 [2024-12-09 11:44:21.873850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.744 [2024-12-09 11:44:21.886524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.744 [2024-12-09 11:44:21.887126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.744 [2024-12-09 11:44:21.887163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:29.744 [2024-12-09 11:44:21.887175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:29.744 [2024-12-09 11:44:21.887417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:29.744 [2024-12-09 11:44:21.887639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.744 [2024-12-09 11:44:21.887649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.744 [2024-12-09 11:44:21.887662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.744 [2024-12-09 11:44:21.887670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.744 [2024-12-09 11:44:21.900358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.744 [2024-12-09 11:44:21.901059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-12-09 11:44:21.901098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:29.744 [2024-12-09 11:44:21.901109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:29.744 [2024-12-09 11:44:21.901347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:29.744 [2024-12-09 11:44:21.901569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.744 [2024-12-09 11:44:21.901578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.744 [2024-12-09 11:44:21.901586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.744 [2024-12-09 11:44:21.901594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.006 [2024-12-09 11:44:21.914284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.006 [2024-12-09 11:44:21.914942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-12-09 11:44:21.914979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.006 [2024-12-09 11:44:21.914990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.006 [2024-12-09 11:44:21.915237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.006 [2024-12-09 11:44:21.915460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.006 [2024-12-09 11:44:21.915469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.006 [2024-12-09 11:44:21.915477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.006 [2024-12-09 11:44:21.915485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.006 [2024-12-09 11:44:21.928158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.006 [2024-12-09 11:44:21.928847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-12-09 11:44:21.928885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.006 [2024-12-09 11:44:21.928896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.006 [2024-12-09 11:44:21.929143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.006 [2024-12-09 11:44:21.929365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.006 [2024-12-09 11:44:21.929375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.006 [2024-12-09 11:44:21.929383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.006 [2024-12-09 11:44:21.929391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.006 [2024-12-09 11:44:21.942061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.006 [2024-12-09 11:44:21.942744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-12-09 11:44:21.942782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.006 [2024-12-09 11:44:21.942793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.006 [2024-12-09 11:44:21.943040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.006 [2024-12-09 11:44:21.943263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.006 [2024-12-09 11:44:21.943272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.006 [2024-12-09 11:44:21.943280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.006 [2024-12-09 11:44:21.943288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.006 [2024-12-09 11:44:21.955966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.006 [2024-12-09 11:44:21.956622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-12-09 11:44:21.956660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.006 [2024-12-09 11:44:21.956671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.006 [2024-12-09 11:44:21.956909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.006 [2024-12-09 11:44:21.957139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.006 [2024-12-09 11:44:21.957149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.006 [2024-12-09 11:44:21.957158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.006 [2024-12-09 11:44:21.957166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.006 [2024-12-09 11:44:21.969828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.006 [2024-12-09 11:44:21.970500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-12-09 11:44:21.970538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.006 [2024-12-09 11:44:21.970548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.006 [2024-12-09 11:44:21.970786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.006 [2024-12-09 11:44:21.971008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.006 [2024-12-09 11:44:21.971026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.006 [2024-12-09 11:44:21.971034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.006 [2024-12-09 11:44:21.971043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.006 [2024-12-09 11:44:21.983711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.006 [2024-12-09 11:44:21.984398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-12-09 11:44:21.984437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.006 [2024-12-09 11:44:21.984452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.006 [2024-12-09 11:44:21.984690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:21.984912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:21.984922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:21.984929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:21.984937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.007 [2024-12-09 11:44:21.997629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:21.998295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:21.998333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:21.998344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:21.998581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:21.998804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:21.998813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:21.998821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:21.998828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.007 [2024-12-09 11:44:22.011512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.012200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.012238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.012249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.012487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.012710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.012719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.012726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.012734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.007 [2024-12-09 11:44:22.025410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.026058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.026096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.026108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.026349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.026576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.026586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.026595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.026606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.007 [2024-12-09 11:44:22.039321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.039796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.039816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.039824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.040050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.040269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.040279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.040287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.040294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.007 [2024-12-09 11:44:22.053174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.053842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.053880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.053891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.054135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.054359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.054368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.054376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.054384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.007 [2024-12-09 11:44:22.067056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.067647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.067667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.067675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.067894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.068119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.068129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.068142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.068149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.007 [2024-12-09 11:44:22.081016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.081700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.081738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.081750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.081988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.082219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.082229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.082237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.082245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.007 [2024-12-09 11:44:22.094967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.095603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.095641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.095652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.095890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.096121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.096131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.096139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.096147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.007 [2024-12-09 11:44:22.108821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.109516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.109555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.109567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.007 [2024-12-09 11:44:22.109805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.007 [2024-12-09 11:44:22.110042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.007 [2024-12-09 11:44:22.110052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.007 [2024-12-09 11:44:22.110060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.007 [2024-12-09 11:44:22.110068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.007 [2024-12-09 11:44:22.122741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.007 [2024-12-09 11:44:22.123313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-12-09 11:44:22.123333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.007 [2024-12-09 11:44:22.123341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.008 [2024-12-09 11:44:22.123560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.008 [2024-12-09 11:44:22.123778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.008 [2024-12-09 11:44:22.123787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.008 [2024-12-09 11:44:22.123794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.008 [2024-12-09 11:44:22.123801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.008 [2024-12-09 11:44:22.136683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.008 [2024-12-09 11:44:22.137184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-12-09 11:44:22.137202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.008 [2024-12-09 11:44:22.137210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.008 [2024-12-09 11:44:22.137428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.008 [2024-12-09 11:44:22.137646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.008 [2024-12-09 11:44:22.137654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.008 [2024-12-09 11:44:22.137661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.008 [2024-12-09 11:44:22.137668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.008 [2024-12-09 11:44:22.150544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.008 [2024-12-09 11:44:22.151092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-12-09 11:44:22.151109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.008 [2024-12-09 11:44:22.151117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.008 [2024-12-09 11:44:22.151334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.008 [2024-12-09 11:44:22.151552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.008 [2024-12-09 11:44:22.151560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.008 [2024-12-09 11:44:22.151567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.008 [2024-12-09 11:44:22.151574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.008 [2024-12-09 11:44:22.164451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.008 [2024-12-09 11:44:22.165119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-12-09 11:44:22.165157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.008 [2024-12-09 11:44:22.165174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.008 [2024-12-09 11:44:22.165416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.008 [2024-12-09 11:44:22.165638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.008 [2024-12-09 11:44:22.165647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.008 [2024-12-09 11:44:22.165656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.008 [2024-12-09 11:44:22.165664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.271 [2024-12-09 11:44:22.178344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.179008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.179054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.179067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.179307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.271 [2024-12-09 11:44:22.179529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.271 [2024-12-09 11:44:22.179539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.271 [2024-12-09 11:44:22.179547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.271 [2024-12-09 11:44:22.179556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.271 [2024-12-09 11:44:22.192242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.192796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.192815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.192823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.193048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.271 [2024-12-09 11:44:22.193266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.271 [2024-12-09 11:44:22.193276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.271 [2024-12-09 11:44:22.193283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.271 [2024-12-09 11:44:22.193290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.271 [2024-12-09 11:44:22.206193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.206720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.206737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.206745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.206963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.271 [2024-12-09 11:44:22.207193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.271 [2024-12-09 11:44:22.207202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.271 [2024-12-09 11:44:22.207209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.271 [2024-12-09 11:44:22.207216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.271 [2024-12-09 11:44:22.220162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.220729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.220767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.220778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.221025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.271 [2024-12-09 11:44:22.221249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.271 [2024-12-09 11:44:22.221258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.271 [2024-12-09 11:44:22.221266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.271 [2024-12-09 11:44:22.221274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.271 [2024-12-09 11:44:22.233965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.234482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.234501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.234509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.234728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.271 [2024-12-09 11:44:22.234946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.271 [2024-12-09 11:44:22.234954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.271 [2024-12-09 11:44:22.234961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.271 [2024-12-09 11:44:22.234968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.271 [2024-12-09 11:44:22.247894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.248548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.248586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.248597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.248836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.271 [2024-12-09 11:44:22.249067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.271 [2024-12-09 11:44:22.249078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.271 [2024-12-09 11:44:22.249090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.271 [2024-12-09 11:44:22.249098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.271 [2024-12-09 11:44:22.261793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.262392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.262412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.262420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.262638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.271 [2024-12-09 11:44:22.262856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.271 [2024-12-09 11:44:22.262865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.271 [2024-12-09 11:44:22.262872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.271 [2024-12-09 11:44:22.262879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.271 [2024-12-09 11:44:22.275778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.271 [2024-12-09 11:44:22.276176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.271 [2024-12-09 11:44:22.276194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.271 [2024-12-09 11:44:22.276202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.271 [2024-12-09 11:44:22.276421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.276639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.276647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.276654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.276661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.272 [2024-12-09 11:44:22.289561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.290124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.290162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.290174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.290415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.290637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.290646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.290654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.290662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.272 [2024-12-09 11:44:22.303458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.304160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.304198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.304209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.304447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.304669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.304678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.304686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.304694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.272 [2024-12-09 11:44:22.317376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.317933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.317952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.317961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.318185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.318404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.318413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.318420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.318428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.272 [2024-12-09 11:44:22.331299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.331964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.332002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.332020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.332258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.332481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.332490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.332498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.332506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.272 [2024-12-09 11:44:22.345180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.345763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.345782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.345795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.346021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.346240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.346248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.346255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.346262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.272 [2024-12-09 11:44:22.359141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.359675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.359692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.359700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.359917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.360141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.360150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.360157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.360164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.272 [2024-12-09 11:44:22.373041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.373659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.373697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.373708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.373946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.374177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.374187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.374194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.374202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.272 [2024-12-09 11:44:22.386873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.387419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.387439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.387447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.387666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.387889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.387897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.387904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.387911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.272 [2024-12-09 11:44:22.400802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.401443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.401482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.401493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.272 [2024-12-09 11:44:22.401732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.272 [2024-12-09 11:44:22.401954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.272 [2024-12-09 11:44:22.401964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.272 [2024-12-09 11:44:22.401971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.272 [2024-12-09 11:44:22.401979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.272 [2024-12-09 11:44:22.414694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.272 [2024-12-09 11:44:22.415387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-12-09 11:44:22.415425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.272 [2024-12-09 11:44:22.415436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.273 [2024-12-09 11:44:22.415673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.273 [2024-12-09 11:44:22.415896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.273 [2024-12-09 11:44:22.415906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.273 [2024-12-09 11:44:22.415913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.273 [2024-12-09 11:44:22.415921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.273 [2024-12-09 11:44:22.428606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.273 [2024-12-09 11:44:22.429300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-12-09 11:44:22.429338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.273 [2024-12-09 11:44:22.429349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.273 [2024-12-09 11:44:22.429587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.273 [2024-12-09 11:44:22.429809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.273 [2024-12-09 11:44:22.429818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.273 [2024-12-09 11:44:22.429831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.273 [2024-12-09 11:44:22.429839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.535 [2024-12-09 11:44:22.442509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.535 [2024-12-09 11:44:22.443093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.535 [2024-12-09 11:44:22.443113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.535 [2024-12-09 11:44:22.443121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.535 [2024-12-09 11:44:22.443340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.535 [2024-12-09 11:44:22.443559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.535 [2024-12-09 11:44:22.443567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.535 [2024-12-09 11:44:22.443575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.535 [2024-12-09 11:44:22.443581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.535 [2024-12-09 11:44:22.456286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.535 [2024-12-09 11:44:22.456821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.535 [2024-12-09 11:44:22.456839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.535 [2024-12-09 11:44:22.456847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.535 [2024-12-09 11:44:22.457070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.535 [2024-12-09 11:44:22.457288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.535 [2024-12-09 11:44:22.457297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.535 [2024-12-09 11:44:22.457304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.535 [2024-12-09 11:44:22.457311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.535 [2024-12-09 11:44:22.470185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.535 [2024-12-09 11:44:22.470815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.535 [2024-12-09 11:44:22.470852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:30.535 [2024-12-09 11:44:22.470863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:30.535 [2024-12-09 11:44:22.471110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:30.535 [2024-12-09 11:44:22.471333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.535 [2024-12-09 11:44:22.471342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.535 [2024-12-09 11:44:22.471350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.535 [2024-12-09 11:44:22.471359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.535 [2024-12-09 11:44:22.484034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.535 [2024-12-09 11:44:22.484588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.535 [2024-12-09 11:44:22.484607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.535 [2024-12-09 11:44:22.484615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.535 [2024-12-09 11:44:22.484834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.535 [2024-12-09 11:44:22.485058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.535 [2024-12-09 11:44:22.485067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.535 [2024-12-09 11:44:22.485074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.535 [2024-12-09 11:44:22.485080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.535 [2024-12-09 11:44:22.497960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.535 [2024-12-09 11:44:22.498497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.535 [2024-12-09 11:44:22.498514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.535 [2024-12-09 11:44:22.498522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.535 [2024-12-09 11:44:22.498739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.535 [2024-12-09 11:44:22.498957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.535 [2024-12-09 11:44:22.498965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.535 [2024-12-09 11:44:22.498972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.535 [2024-12-09 11:44:22.498979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
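One reading note on the repeated nvme_tcp.c: 326 line: "The recv state of tqpair=... is same with the state(6) to be set" is emitted by nvme_tcp_qpair_set_recv_state() when the qpair's receive state is set to a value it already holds, which happens here because every attempt dies at connect() and the qpair is already in its terminal state by the time it is torn down. It is a side effect of the refused connect above it, not a separate fault; which state name the value 6 maps to depends on the SPDK revision, so it is safer to read it as "already in the target state" than as a specific enum member.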
00:29:30.535 [2024-12-09 11:44:22.511865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.535 [2024-12-09 11:44:22.512406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.535 [2024-12-09 11:44:22.512423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.535 [2024-12-09 11:44:22.512430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.535 [2024-12-09 11:44:22.512648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.535 [2024-12-09 11:44:22.512865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.535 [2024-12-09 11:44:22.512873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.535 [2024-12-09 11:44:22.512881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.535 [2024-12-09 11:44:22.512888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.535 [2024-12-09 11:44:22.525764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.535 [2024-12-09 11:44:22.526391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.526429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.526450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.526688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.526911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.526920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.526928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.526936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.539610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.540176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.540196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.540204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.540423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.540641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.540650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.540657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.540664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.553546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.554236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.554274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.554285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.554523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.554745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.554754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.554763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.554771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.567451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.568118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.568156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.568167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.568405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.568632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.568641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.568649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.568657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.581335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.581922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.581942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.581950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.582173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.582392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.582401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.582408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.582415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.595293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.595825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.595842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.595850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.596083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.596303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.596312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.596319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.596326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.609193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.609818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.609856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.609867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.610114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.610337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.610347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.610360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.610368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.623055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.623597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.623635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.623648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.623889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.624121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.624131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.624139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.624147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.637028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.637573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.637593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.637601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.637819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.638042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.638052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.638059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.638066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.650937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.651565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.651603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.651614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.651852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.652084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.652094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.652102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.652110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.664858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.665459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.665478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.665486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.665705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.665923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.665931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.665939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.665946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.678825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.679272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.679290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.679297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.679515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.679733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.679741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.679748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.679755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.536 [2024-12-09 11:44:22.692633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.536 [2024-12-09 11:44:22.693321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.536 [2024-12-09 11:44:22.693358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.536 [2024-12-09 11:44:22.693369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.536 [2024-12-09 11:44:22.693607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.536 [2024-12-09 11:44:22.693830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.536 [2024-12-09 11:44:22.693839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.536 [2024-12-09 11:44:22.693847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.536 [2024-12-09 11:44:22.693855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.798 7140.75 IOPS, 27.89 MiB/s [2024-12-09T10:44:22.960Z] [2024-12-09 11:44:22.707571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.798 [2024-12-09 11:44:22.708268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.798 [2024-12-09 11:44:22.708306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.798 [2024-12-09 11:44:22.708322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.798 [2024-12-09 11:44:22.708560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.798 [2024-12-09 11:44:22.708782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.798 [2024-12-09 11:44:22.708791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.798 [2024-12-09 11:44:22.708799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.798 [2024-12-09 11:44:22.708807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.798 [2024-12-09 11:44:22.721487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.798 [2024-12-09 11:44:22.721963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.798 [2024-12-09 11:44:22.721982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.798 [2024-12-09 11:44:22.721990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.798 [2024-12-09 11:44:22.722216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.798 [2024-12-09 11:44:22.722434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.798 [2024-12-09 11:44:22.722443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.798 [2024-12-09 11:44:22.722450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.798 [2024-12-09 11:44:22.722458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
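The interleaved "7140.75 IOPS, 27.89 MiB/s" line is the perf tool's periodic throughput sample, and the two numbers agree for a 4 KiB I/O size (an assumption; the block size is not restated in this part of the log): 7140.75 x 4096 bytes is roughly 27.89 MiB/s. A quick check:

    #include <stdio.h>

    int main(void)
    {
        double iops = 7140.75;      /* from the sample above */
        double io_bytes = 4096.0;   /* assumed 4 KiB I/O size */
        printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0));
        /* Prints: 27.89 MiB/s -- matching the log line */
        return 0;
    }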
00:29:30.798 [2024-12-09 11:44:22.735318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.798 [2024-12-09 11:44:22.735895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.798 [2024-12-09 11:44:22.735912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.798 [2024-12-09 11:44:22.735920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.798 [2024-12-09 11:44:22.736143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.798 [2024-12-09 11:44:22.736361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.798 [2024-12-09 11:44:22.736370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.798 [2024-12-09 11:44:22.736377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.798 [2024-12-09 11:44:22.736384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.798 [2024-12-09 11:44:22.749245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.798 [2024-12-09 11:44:22.749815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.749832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.749840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.750063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.750285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.750293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.750300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.750307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.763172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.763691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.763708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.763715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.763933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.764156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.764165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.764173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.764180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.777046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.777571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.777588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.777595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.777812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.778035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.778045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.778052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.778059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.790969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.791519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.791556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.791567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.791806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.792041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.792052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.792065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.792073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.804764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.805405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.805443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.805454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.805691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.805914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.805923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.805931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.805939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.818647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.819294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.819333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.819343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.819581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.819803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.819813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.819821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.819828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.832519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.833223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.833261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.833273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.833513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.833735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.833744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.833752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.833760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.846437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.846984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.847003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.847018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.847237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.847455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.847463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.847470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.847477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.860344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.860874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.860892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.860899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.861123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.861342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.861350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.861357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.861363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.799 [2024-12-09 11:44:22.874258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.799 [2024-12-09 11:44:22.874911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.799 [2024-12-09 11:44:22.874949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.799 [2024-12-09 11:44:22.874960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.799 [2024-12-09 11:44:22.875207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.799 [2024-12-09 11:44:22.875429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.799 [2024-12-09 11:44:22.875438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.799 [2024-12-09 11:44:22.875446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.799 [2024-12-09 11:44:22.875454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.800 [2024-12-09 11:44:22.888115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.800 [2024-12-09 11:44:22.888793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.800 [2024-12-09 11:44:22.888831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.800 [2024-12-09 11:44:22.888846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.800 [2024-12-09 11:44:22.889093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.800 [2024-12-09 11:44:22.889316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.800 [2024-12-09 11:44:22.889325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.800 [2024-12-09 11:44:22.889333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.800 [2024-12-09 11:44:22.889341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.800 [2024-12-09 11:44:22.902020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.800 [2024-12-09 11:44:22.902678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.800 [2024-12-09 11:44:22.902716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.800 [2024-12-09 11:44:22.902728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.800 [2024-12-09 11:44:22.902966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.800 [2024-12-09 11:44:22.903197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.800 [2024-12-09 11:44:22.903207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.800 [2024-12-09 11:44:22.903215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.800 [2024-12-09 11:44:22.903223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.800 [2024-12-09 11:44:22.915887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.800 [2024-12-09 11:44:22.916522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.800 [2024-12-09 11:44:22.916560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.800 [2024-12-09 11:44:22.916570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.800 [2024-12-09 11:44:22.916809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.800 [2024-12-09 11:44:22.917039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.800 [2024-12-09 11:44:22.917049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.800 [2024-12-09 11:44:22.917057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.800 [2024-12-09 11:44:22.917065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.800 [2024-12-09 11:44:22.929732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.800 [2024-12-09 11:44:22.930380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.800 [2024-12-09 11:44:22.930419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.800 [2024-12-09 11:44:22.930430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.800 [2024-12-09 11:44:22.930668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.800 [2024-12-09 11:44:22.930895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.800 [2024-12-09 11:44:22.930904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.800 [2024-12-09 11:44:22.930912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.800 [2024-12-09 11:44:22.930920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.800 [2024-12-09 11:44:22.943589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.800 [2024-12-09 11:44:22.944233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.800 [2024-12-09 11:44:22.944270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:30.800 [2024-12-09 11:44:22.944281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:30.800 [2024-12-09 11:44:22.944519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:30.800 [2024-12-09 11:44:22.944741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.800 [2024-12-09 11:44:22.944751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.800 [2024-12-09 11:44:22.944759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.800 [2024-12-09 11:44:22.944767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.800 [2024-12-09 11:44:22.957457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.062 [2024-12-09 11:44:22.958042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.062 [2024-12-09 11:44:22.958063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.062 [2024-12-09 11:44:22.958071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.062 [2024-12-09 11:44:22.958289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.062 [2024-12-09 11:44:22.958508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.062 [2024-12-09 11:44:22.958516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.062 [2024-12-09 11:44:22.958523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.062 [2024-12-09 11:44:22.958530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.062 [2024-12-09 11:44:22.971405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.062 [2024-12-09 11:44:22.972023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.062 [2024-12-09 11:44:22.972060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.062 [2024-12-09 11:44:22.972072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.062 [2024-12-09 11:44:22.972313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.062 [2024-12-09 11:44:22.972535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.062 [2024-12-09 11:44:22.972544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.062 [2024-12-09 11:44:22.972556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.062 [2024-12-09 11:44:22.972564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.062 [2024-12-09 11:44:22.985228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.062 [2024-12-09 11:44:22.985882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.062 [2024-12-09 11:44:22.985920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:31.062 [2024-12-09 11:44:22.985931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:31.062 [2024-12-09 11:44:22.986178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:31.062 [2024-12-09 11:44:22.986401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.062 [2024-12-09 11:44:22.986410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.062 [2024-12-09 11:44:22.986418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.062 [2024-12-09 11:44:22.986425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.062 [2024-12-09 11:44:22.999099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.062 [2024-12-09 11:44:22.999766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.062 [2024-12-09 11:44:22.999804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:31.062 [2024-12-09 11:44:22.999815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:31.062 [2024-12-09 11:44:23.000062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:31.062 [2024-12-09 11:44:23.000285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.062 [2024-12-09 11:44:23.000294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.062 [2024-12-09 11:44:23.000302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.062 [2024-12-09 11:44:23.000310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.062 [2024-12-09 11:44:23.012983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.062 [2024-12-09 11:44:23.013667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.062 [2024-12-09 11:44:23.013705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:31.062 [2024-12-09 11:44:23.013716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:31.062 [2024-12-09 11:44:23.013954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:31.062 [2024-12-09 11:44:23.014185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.063 [2024-12-09 11:44:23.014195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.063 [2024-12-09 11:44:23.014203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.063 [2024-12-09 11:44:23.014211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.063 [2024-12-09 11:44:23.026875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.063 [2024-12-09 11:44:23.027368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.063 [2024-12-09 11:44:23.027406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:31.063 [2024-12-09 11:44:23.027419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:31.063 [2024-12-09 11:44:23.027658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:31.063 [2024-12-09 11:44:23.027880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.063 [2024-12-09 11:44:23.027890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.063 [2024-12-09 11:44:23.027898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.063 [2024-12-09 11:44:23.027908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.063 [2024-12-09 11:44:23.040801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.063 [2024-12-09 11:44:23.041495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.063 [2024-12-09 11:44:23.041534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:31.063 [2024-12-09 11:44:23.041545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:31.063 [2024-12-09 11:44:23.041783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:31.063 [2024-12-09 11:44:23.042004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.063 [2024-12-09 11:44:23.042021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.063 [2024-12-09 11:44:23.042029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.063 [2024-12-09 11:44:23.042037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.063 [2024-12-09 11:44:23.054710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.063 [2024-12-09 11:44:23.055356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.063 [2024-12-09 11:44:23.055394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:31.063 [2024-12-09 11:44:23.055405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:31.063 [2024-12-09 11:44:23.055642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:31.063 [2024-12-09 11:44:23.055865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.063 [2024-12-09 11:44:23.055874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.063 [2024-12-09 11:44:23.055882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.063 [2024-12-09 11:44:23.055889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.063 [2024-12-09 11:44:23.068562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.063 [2024-12-09 11:44:23.069113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.063 [2024-12-09 11:44:23.069134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.063 [2024-12-09 11:44:23.069148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.063 [2024-12-09 11:44:23.069367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.063 [2024-12-09 11:44:23.069585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.063 [2024-12-09 11:44:23.069593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.063 [2024-12-09 11:44:23.069600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.063 [2024-12-09 11:44:23.069607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.063 [2024-12-09 11:44:23.082500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.063 [2024-12-09 11:44:23.083127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.063 [2024-12-09 11:44:23.083165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.063 [2024-12-09 11:44:23.083175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.063 [2024-12-09 11:44:23.083413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.063 [2024-12-09 11:44:23.083635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.063 [2024-12-09 11:44:23.083644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.063 [2024-12-09 11:44:23.083652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.063 [2024-12-09 11:44:23.083660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.063 [2024-12-09 11:44:23.096339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.063 [2024-12-09 11:44:23.096963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.063 [2024-12-09 11:44:23.097001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.063 [2024-12-09 11:44:23.097020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.063 [2024-12-09 11:44:23.097258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.063 [2024-12-09 11:44:23.097481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.063 [2024-12-09 11:44:23.097490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.063 [2024-12-09 11:44:23.097498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.063 [2024-12-09 11:44:23.097507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.063 [2024-12-09 11:44:23.110186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.063 [2024-12-09 11:44:23.110855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.063 [2024-12-09 11:44:23.110893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.063 [2024-12-09 11:44:23.110904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.063 [2024-12-09 11:44:23.111161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.063 [2024-12-09 11:44:23.111388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.063 [2024-12-09 11:44:23.111398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.063 [2024-12-09 11:44:23.111405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.063 [2024-12-09 11:44:23.111414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.063 [2024-12-09 11:44:23.124073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.063 [2024-12-09 11:44:23.124730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.063 [2024-12-09 11:44:23.124768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.063 [2024-12-09 11:44:23.124779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.063 [2024-12-09 11:44:23.125026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.063 [2024-12-09 11:44:23.125249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.063 [2024-12-09 11:44:23.125258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.063 [2024-12-09 11:44:23.125265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.063 [2024-12-09 11:44:23.125273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.063 [2024-12-09 11:44:23.137935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.063 [2024-12-09 11:44:23.138614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.063 [2024-12-09 11:44:23.138652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.063 [2024-12-09 11:44:23.138663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.063 [2024-12-09 11:44:23.138901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.063 [2024-12-09 11:44:23.139133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.063 [2024-12-09 11:44:23.139143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.063 [2024-12-09 11:44:23.139150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.063 [2024-12-09 11:44:23.139158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.063 [2024-12-09 11:44:23.151820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.063 [2024-12-09 11:44:23.152454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.063 [2024-12-09 11:44:23.152492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.063 [2024-12-09 11:44:23.152503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.063 [2024-12-09 11:44:23.152741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.063 [2024-12-09 11:44:23.152963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.064 [2024-12-09 11:44:23.152972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.064 [2024-12-09 11:44:23.152984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.064 [2024-12-09 11:44:23.152992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.064 [2024-12-09 11:44:23.165668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.064 [2024-12-09 11:44:23.166330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.064 [2024-12-09 11:44:23.166368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.064 [2024-12-09 11:44:23.166379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.064 [2024-12-09 11:44:23.166616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.064 [2024-12-09 11:44:23.166839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.064 [2024-12-09 11:44:23.166847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.064 [2024-12-09 11:44:23.166855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.064 [2024-12-09 11:44:23.166864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.064 [2024-12-09 11:44:23.179541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.064 [2024-12-09 11:44:23.180218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.064 [2024-12-09 11:44:23.180256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.064 [2024-12-09 11:44:23.180267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.064 [2024-12-09 11:44:23.180505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.064 [2024-12-09 11:44:23.180726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.064 [2024-12-09 11:44:23.180735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.064 [2024-12-09 11:44:23.180743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.064 [2024-12-09 11:44:23.180751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.064 [2024-12-09 11:44:23.193426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.064 [2024-12-09 11:44:23.193972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.064 [2024-12-09 11:44:23.193991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.064 [2024-12-09 11:44:23.193999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.064 [2024-12-09 11:44:23.194224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.064 [2024-12-09 11:44:23.194443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.064 [2024-12-09 11:44:23.194451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.064 [2024-12-09 11:44:23.194458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.064 [2024-12-09 11:44:23.194465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.064 [2024-12-09 11:44:23.207367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.064 [2024-12-09 11:44:23.207955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.064 [2024-12-09 11:44:23.207972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.064 [2024-12-09 11:44:23.207980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.064 [2024-12-09 11:44:23.208204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.064 [2024-12-09 11:44:23.208423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.064 [2024-12-09 11:44:23.208431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.064 [2024-12-09 11:44:23.208438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.064 [2024-12-09 11:44:23.208444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.064 [2024-12-09 11:44:23.221348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.325 [2024-12-09 11:44:23.225025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-12-09 11:44:23.225051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.325 [2024-12-09 11:44:23.225061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.325 [2024-12-09 11:44:23.225285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.325 [2024-12-09 11:44:23.225504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.325 [2024-12-09 11:44:23.225513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.325 [2024-12-09 11:44:23.225520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.325 [2024-12-09 11:44:23.225528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.325 [2024-12-09 11:44:23.235309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.325 [2024-12-09 11:44:23.235866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-12-09 11:44:23.235884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.325 [2024-12-09 11:44:23.235892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.325 [2024-12-09 11:44:23.236117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.325 [2024-12-09 11:44:23.236336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.325 [2024-12-09 11:44:23.236344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.325 [2024-12-09 11:44:23.236352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.325 [2024-12-09 11:44:23.236359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.325 [2024-12-09 11:44:23.249248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.325 [2024-12-09 11:44:23.249824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-12-09 11:44:23.249841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.325 [2024-12-09 11:44:23.249853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.325 [2024-12-09 11:44:23.250075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.325 [2024-12-09 11:44:23.250294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.325 [2024-12-09 11:44:23.250302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.325 [2024-12-09 11:44:23.250309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.325 [2024-12-09 11:44:23.250316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.325 [2024-12-09 11:44:23.263215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.325 [2024-12-09 11:44:23.263747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-12-09 11:44:23.263764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.325 [2024-12-09 11:44:23.263772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.325 [2024-12-09 11:44:23.263990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.325 [2024-12-09 11:44:23.264215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.325 [2024-12-09 11:44:23.264223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.325 [2024-12-09 11:44:23.264230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.325 [2024-12-09 11:44:23.264238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.325 [2024-12-09 11:44:23.277312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.325 [2024-12-09 11:44:23.277842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-12-09 11:44:23.277858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.325 [2024-12-09 11:44:23.277866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.325 [2024-12-09 11:44:23.278115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.325 [2024-12-09 11:44:23.278338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.325 [2024-12-09 11:44:23.278346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.325 [2024-12-09 11:44:23.278353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.325 [2024-12-09 11:44:23.278360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.326 [2024-12-09 11:44:23.291255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.291833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.291850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.291859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.292082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.292310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.292318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.292325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.292332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.326 [2024-12-09 11:44:23.305222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.305791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.305807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.305815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.306039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.306258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.306266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.306273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.306280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.326 [2024-12-09 11:44:23.319168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.319741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.319758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.319765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.319983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.320208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.320217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.320224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.320231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.326 [2024-12-09 11:44:23.333177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.333650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.333666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.333673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.333892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.334117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.334126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.334136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.334143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.326 [2024-12-09 11:44:23.347026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.347553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.347571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.347578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.347796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.348020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.348028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.348035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.348042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.326 [2024-12-09 11:44:23.360935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.361512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.361530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.361537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.361755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.361972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.361980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.361987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.361994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.326 [2024-12-09 11:44:23.374706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.375234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.375252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.375259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.375476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.375694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.375702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.375710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.375716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.326 [2024-12-09 11:44:23.388591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.389313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.389351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.389362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.389600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.389822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.389831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.389839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.389847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.326 [2024-12-09 11:44:23.402532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.403142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.403180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.403191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.403429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.403652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.403661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.403669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.403677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.326 [2024-12-09 11:44:23.416363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.416882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-12-09 11:44:23.416920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.326 [2024-12-09 11:44:23.416931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.326 [2024-12-09 11:44:23.417178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.326 [2024-12-09 11:44:23.417402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.326 [2024-12-09 11:44:23.417411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.326 [2024-12-09 11:44:23.417418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.326 [2024-12-09 11:44:23.417426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.326 [2024-12-09 11:44:23.430304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.326 [2024-12-09 11:44:23.430887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-12-09 11:44:23.430906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.327 [2024-12-09 11:44:23.430919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.327 [2024-12-09 11:44:23.431154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.327 [2024-12-09 11:44:23.431374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.327 [2024-12-09 11:44:23.431382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.327 [2024-12-09 11:44:23.431390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.327 [2024-12-09 11:44:23.431396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.327 [2024-12-09 11:44:23.444264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.327 [2024-12-09 11:44:23.444913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-12-09 11:44:23.444950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.327 [2024-12-09 11:44:23.444961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.327 [2024-12-09 11:44:23.445208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.327 [2024-12-09 11:44:23.445431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.327 [2024-12-09 11:44:23.445440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.327 [2024-12-09 11:44:23.445448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.327 [2024-12-09 11:44:23.445456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.327 [2024-12-09 11:44:23.458126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.327 [2024-12-09 11:44:23.458815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-12-09 11:44:23.458853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.327 [2024-12-09 11:44:23.458864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.327 [2024-12-09 11:44:23.459111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.327 [2024-12-09 11:44:23.459335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.327 [2024-12-09 11:44:23.459344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.327 [2024-12-09 11:44:23.459351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.327 [2024-12-09 11:44:23.459359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.327 [2024-12-09 11:44:23.472029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.327 [2024-12-09 11:44:23.472705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-12-09 11:44:23.472742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.327 [2024-12-09 11:44:23.472753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.327 [2024-12-09 11:44:23.472992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.327 [2024-12-09 11:44:23.473226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.327 [2024-12-09 11:44:23.473236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.327 [2024-12-09 11:44:23.473244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.327 [2024-12-09 11:44:23.473252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.588 [2024-12-09 11:44:23.485919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.588 [2024-12-09 11:44:23.486512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.588 [2024-12-09 11:44:23.486532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.588 [2024-12-09 11:44:23.486540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.588 [2024-12-09 11:44:23.486758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.588 [2024-12-09 11:44:23.486976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.588 [2024-12-09 11:44:23.486984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.588 [2024-12-09 11:44:23.486992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.588 [2024-12-09 11:44:23.486999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.588 [2024-12-09 11:44:23.499701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.588 [2024-12-09 11:44:23.500368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.588 [2024-12-09 11:44:23.500406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.588 [2024-12-09 11:44:23.500416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.588 [2024-12-09 11:44:23.500654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.588 [2024-12-09 11:44:23.500876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.588 [2024-12-09 11:44:23.500885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.588 [2024-12-09 11:44:23.500893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.588 [2024-12-09 11:44:23.500901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.588 [2024-12-09 11:44:23.513583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.588 [2024-12-09 11:44:23.514305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.588 [2024-12-09 11:44:23.514343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.588 [2024-12-09 11:44:23.514354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.588 [2024-12-09 11:44:23.514592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.588 [2024-12-09 11:44:23.514814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.588 [2024-12-09 11:44:23.514823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.588 [2024-12-09 11:44:23.514835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.514844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.589 [2024-12-09 11:44:23.527516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.528124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.528162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.528175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.528416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.528638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.528647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.528654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.528663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.589 [2024-12-09 11:44:23.541347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.541994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.542039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.542050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.542288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.542510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.542519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.542527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.542535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.589 [2024-12-09 11:44:23.555208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.555885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.555923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.555934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.556181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.556404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.556414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.556421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.556429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.589 [2024-12-09 11:44:23.569098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.569743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.569781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.569791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.570037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.570260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.570270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.570278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.570286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.589 [2024-12-09 11:44:23.582955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.583635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.583673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.583684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.583922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.584152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.584162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.584170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.584178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.589 [2024-12-09 11:44:23.596850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.597406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.597444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.597455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.597694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.597915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.597924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.597932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.597940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.589 [2024-12-09 11:44:23.610624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.611326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.611364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.611379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.611617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.611840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.611849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.611857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.611865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.589 [2024-12-09 11:44:23.624552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.625143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.625181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.625193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.625434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.625657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.625666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.625674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.625682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.589 [2024-12-09 11:44:23.638359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.639057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.639094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.639105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.639343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.639566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.639574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.589 [2024-12-09 11:44:23.639583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.589 [2024-12-09 11:44:23.639591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.589 [2024-12-09 11:44:23.652264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.589 [2024-12-09 11:44:23.652925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.589 [2024-12-09 11:44:23.652963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.589 [2024-12-09 11:44:23.652974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.589 [2024-12-09 11:44:23.653220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.589 [2024-12-09 11:44:23.653448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.589 [2024-12-09 11:44:23.653457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.590 [2024-12-09 11:44:23.653464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.590 [2024-12-09 11:44:23.653472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.590 [2024-12-09 11:44:23.666147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.590 [2024-12-09 11:44:23.666736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.590 [2024-12-09 11:44:23.666755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.590 [2024-12-09 11:44:23.666763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.590 [2024-12-09 11:44:23.666982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.590 [2024-12-09 11:44:23.667207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.590 [2024-12-09 11:44:23.667216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.590 [2024-12-09 11:44:23.667223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.590 [2024-12-09 11:44:23.667230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.590 [2024-12-09 11:44:23.680096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.590 [2024-12-09 11:44:23.680733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.590 [2024-12-09 11:44:23.680771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.590 [2024-12-09 11:44:23.680782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.590 [2024-12-09 11:44:23.681029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.590 [2024-12-09 11:44:23.681252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.590 [2024-12-09 11:44:23.681261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.590 [2024-12-09 11:44:23.681269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.590 [2024-12-09 11:44:23.681277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.590 [2024-12-09 11:44:23.693872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.590 [2024-12-09 11:44:23.694439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.590 [2024-12-09 11:44:23.694459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.590 [2024-12-09 11:44:23.694466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.590 [2024-12-09 11:44:23.694685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.590 [2024-12-09 11:44:23.694903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.590 [2024-12-09 11:44:23.694911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.590 [2024-12-09 11:44:23.694923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.590 [2024-12-09 11:44:23.694931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.590 5712.60 IOPS, 22.31 MiB/s [2024-12-09T10:44:23.752Z] [2024-12-09 11:44:23.709298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.590 [2024-12-09 11:44:23.709847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.590 [2024-12-09 11:44:23.709885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.590 [2024-12-09 11:44:23.709895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.590 [2024-12-09 11:44:23.710143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.590 [2024-12-09 11:44:23.710366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.590 [2024-12-09 11:44:23.710375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.590 [2024-12-09 11:44:23.710383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.590 [2024-12-09 11:44:23.710392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.590 [2024-12-09 11:44:23.723076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.590 [2024-12-09 11:44:23.723755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.590 [2024-12-09 11:44:23.723793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.590 [2024-12-09 11:44:23.723805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.590 [2024-12-09 11:44:23.724052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.590 [2024-12-09 11:44:23.724275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.590 [2024-12-09 11:44:23.724284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.590 [2024-12-09 11:44:23.724292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.590 [2024-12-09 11:44:23.724300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
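The interleaved "5712.60 IOPS, 22.31 MiB/s" entry above is bdevperf's periodic performance sample, printed regardless of the reset errors around it. The two figures are consistent with a 4 KiB I/O size (an inference from the numbers, not something the log states):

    5712.60 IOPS x 4096 B = 23,398,810 B/s
    23,398,810 B/s / 2^20 B/MiB = 22.31 MiB/s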
00:29:31.590 [2024-12-09 11:44:23.736969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.590 [2024-12-09 11:44:23.737513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.590 [2024-12-09 11:44:23.737533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.590 [2024-12-09 11:44:23.737542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.590 [2024-12-09 11:44:23.737761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.590 [2024-12-09 11:44:23.737979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.590 [2024-12-09 11:44:23.737987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.590 [2024-12-09 11:44:23.737994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.590 [2024-12-09 11:44:23.738001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.852 [2024-12-09 11:44:23.750882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.751509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.751548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.751559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.852 [2024-12-09 11:44:23.751797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.852 [2024-12-09 11:44:23.752027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.852 [2024-12-09 11:44:23.752037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.852 [2024-12-09 11:44:23.752045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.852 [2024-12-09 11:44:23.752053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.852 [2024-12-09 11:44:23.764737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.765300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.765320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.765328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.852 [2024-12-09 11:44:23.765547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.852 [2024-12-09 11:44:23.765765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.852 [2024-12-09 11:44:23.765774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.852 [2024-12-09 11:44:23.765781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.852 [2024-12-09 11:44:23.765789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.852 [2024-12-09 11:44:23.778665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.779330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.779368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.779380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.852 [2024-12-09 11:44:23.779622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.852 [2024-12-09 11:44:23.779844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.852 [2024-12-09 11:44:23.779854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.852 [2024-12-09 11:44:23.779862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.852 [2024-12-09 11:44:23.779871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.852 [2024-12-09 11:44:23.792557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.793241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.793279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.793295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.852 [2024-12-09 11:44:23.793533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.852 [2024-12-09 11:44:23.793756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.852 [2024-12-09 11:44:23.793765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.852 [2024-12-09 11:44:23.793773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.852 [2024-12-09 11:44:23.793781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.852 [2024-12-09 11:44:23.806489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.807240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.807277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.807293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.852 [2024-12-09 11:44:23.807530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.852 [2024-12-09 11:44:23.807753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.852 [2024-12-09 11:44:23.807762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.852 [2024-12-09 11:44:23.807770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.852 [2024-12-09 11:44:23.807778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.852 [2024-12-09 11:44:23.820261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.820920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.820958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.820971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.852 [2024-12-09 11:44:23.821221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.852 [2024-12-09 11:44:23.821443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.852 [2024-12-09 11:44:23.821453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.852 [2024-12-09 11:44:23.821460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.852 [2024-12-09 11:44:23.821468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.852 [2024-12-09 11:44:23.834146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.834827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.834864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.834875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.852 [2024-12-09 11:44:23.835120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.852 [2024-12-09 11:44:23.835348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.852 [2024-12-09 11:44:23.835357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.852 [2024-12-09 11:44:23.835366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.852 [2024-12-09 11:44:23.835374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.852 [2024-12-09 11:44:23.848047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.852 [2024-12-09 11:44:23.848636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-12-09 11:44:23.848656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.852 [2024-12-09 11:44:23.848664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.848882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.849107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.849117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.849124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.849132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.853 [2024-12-09 11:44:23.862016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.862648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.862685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.862696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.862935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.863167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.863178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.863186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.863193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.853 [2024-12-09 11:44:23.875868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.876425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.876446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.876454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.876672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.876890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.876898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.876910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.876917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.853 [2024-12-09 11:44:23.889800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.890430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.890448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.890457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.890675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.890893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.890902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.890909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.890917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.853 [2024-12-09 11:44:23.903621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.904183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.904201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.904208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.904426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.904644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.904652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.904659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.904666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.853 [2024-12-09 11:44:23.917551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.918153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.918191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.918204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.918443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.918665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.918674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.918682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.918690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.853 [2024-12-09 11:44:23.931368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.931933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.931952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.931961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.932184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.932403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.932411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.932418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.932425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.853 [2024-12-09 11:44:23.945307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.945823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.945860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.945871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.946117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.946341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.946350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.946358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.946365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.853 [2024-12-09 11:44:23.959257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.959922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.959960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.959972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.960222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.960445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.960454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.960462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.960470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.853 [2024-12-09 11:44:23.973151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.973820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.973857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.973876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.974122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.974345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.853 [2024-12-09 11:44:23.974354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.853 [2024-12-09 11:44:23.974362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.853 [2024-12-09 11:44:23.974370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.853 [2024-12-09 11:44:23.987046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.853 [2024-12-09 11:44:23.987710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-12-09 11:44:23.987748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.853 [2024-12-09 11:44:23.987759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.853 [2024-12-09 11:44:23.987997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.853 [2024-12-09 11:44:23.988229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.854 [2024-12-09 11:44:23.988239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.854 [2024-12-09 11:44:23.988247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.854 [2024-12-09 11:44:23.988255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.854 [2024-12-09 11:44:24.000935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.854 [2024-12-09 11:44:24.001577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-12-09 11:44:24.001615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:31.854 [2024-12-09 11:44:24.001627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:31.854 [2024-12-09 11:44:24.001867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:31.854 [2024-12-09 11:44:24.002096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.854 [2024-12-09 11:44:24.002107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.854 [2024-12-09 11:44:24.002114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.854 [2024-12-09 11:44:24.002122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.115 [2024-12-09 11:44:24.014802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.115 [2024-12-09 11:44:24.015490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.115 [2024-12-09 11:44:24.015529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.115 [2024-12-09 11:44:24.015540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.115 [2024-12-09 11:44:24.015778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.115 [2024-12-09 11:44:24.016005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.115 [2024-12-09 11:44:24.016024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.115 [2024-12-09 11:44:24.016032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.115 [2024-12-09 11:44:24.016040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.115 [2024-12-09 11:44:24.028712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.115 [2024-12-09 11:44:24.029370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.115 [2024-12-09 11:44:24.029408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.115 [2024-12-09 11:44:24.029419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.115 [2024-12-09 11:44:24.029657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.115 [2024-12-09 11:44:24.029879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.115 [2024-12-09 11:44:24.029890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.115 [2024-12-09 11:44:24.029897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.115 [2024-12-09 11:44:24.029906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.115 [2024-12-09 11:44:24.042585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.115 [2024-12-09 11:44:24.043123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.115 [2024-12-09 11:44:24.043143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.115 [2024-12-09 11:44:24.043151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.115 [2024-12-09 11:44:24.043369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.115 [2024-12-09 11:44:24.043587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.115 [2024-12-09 11:44:24.043596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.115 [2024-12-09 11:44:24.043604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.115 [2024-12-09 11:44:24.043612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.115 [2024-12-09 11:44:24.056495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.115 [2024-12-09 11:44:24.057116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.115 [2024-12-09 11:44:24.057154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.115 [2024-12-09 11:44:24.057167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.115 [2024-12-09 11:44:24.057406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.115 [2024-12-09 11:44:24.057628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.115 [2024-12-09 11:44:24.057637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.115 [2024-12-09 11:44:24.057649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.115 [2024-12-09 11:44:24.057657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.115 [2024-12-09 11:44:24.070342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.115 [2024-12-09 11:44:24.070878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.115 [2024-12-09 11:44:24.070898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.115 [2024-12-09 11:44:24.070906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.115 [2024-12-09 11:44:24.071161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.115 [2024-12-09 11:44:24.071381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.115 [2024-12-09 11:44:24.071389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.115 [2024-12-09 11:44:24.071396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.115 [2024-12-09 11:44:24.071403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.115 [2024-12-09 11:44:24.084275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.115 [2024-12-09 11:44:24.084852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.115 [2024-12-09 11:44:24.084869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.115 [2024-12-09 11:44:24.084876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.115 [2024-12-09 11:44:24.085098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.115 [2024-12-09 11:44:24.085318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.085327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.085334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.085341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.116 [2024-12-09 11:44:24.098215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.098857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.098895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.098906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.099151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.099375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.099384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.099392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.099400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.116 [2024-12-09 11:44:24.112146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.112668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.112707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.112718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.112956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.113187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.113197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.113205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.113213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.116 [2024-12-09 11:44:24.126093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.126716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.126754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.126765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.127004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.127235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.127244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.127252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.127260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.116 [2024-12-09 11:44:24.139927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.140570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.140608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.140619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.140857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.141086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.141096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.141104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.141112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.116 [2024-12-09 11:44:24.153788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.154355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.154375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.154388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.154607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.154825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.154833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.154840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.154847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.116 [2024-12-09 11:44:24.167722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.168267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.168285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.168293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.168511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.168728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.168737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.168744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.168751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.116 [2024-12-09 11:44:24.181627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.182302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.182341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.182352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.182590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.182812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.182821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.182829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.182837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.116 [2024-12-09 11:44:24.195529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.196132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.196169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.196182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.196423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.196650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.196660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.196668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.196676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.116 [2024-12-09 11:44:24.209360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.116 [2024-12-09 11:44:24.209900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.116 [2024-12-09 11:44:24.209920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.116 [2024-12-09 11:44:24.209928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.116 [2024-12-09 11:44:24.210153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.116 [2024-12-09 11:44:24.210372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.116 [2024-12-09 11:44:24.210380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.116 [2024-12-09 11:44:24.210387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.116 [2024-12-09 11:44:24.210393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.117 [2024-12-09 11:44:24.223283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.117 [2024-12-09 11:44:24.223991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.117 [2024-12-09 11:44:24.224036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.117 [2024-12-09 11:44:24.224047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.117 [2024-12-09 11:44:24.224285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.117 [2024-12-09 11:44:24.224507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.117 [2024-12-09 11:44:24.224517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.117 [2024-12-09 11:44:24.224525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.117 [2024-12-09 11:44:24.224532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.117 [2024-12-09 11:44:24.237207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.117 [2024-12-09 11:44:24.237744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.117 [2024-12-09 11:44:24.237782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.117 [2024-12-09 11:44:24.237793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.117 [2024-12-09 11:44:24.238038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.117 [2024-12-09 11:44:24.238261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.117 [2024-12-09 11:44:24.238270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.117 [2024-12-09 11:44:24.238282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.117 [2024-12-09 11:44:24.238290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.117 [2024-12-09 11:44:24.251171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.117 [2024-12-09 11:44:24.251610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.117 [2024-12-09 11:44:24.251629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.117 [2024-12-09 11:44:24.251637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.117 [2024-12-09 11:44:24.251855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.117 [2024-12-09 11:44:24.252080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.117 [2024-12-09 11:44:24.252089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.117 [2024-12-09 11:44:24.252096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.117 [2024-12-09 11:44:24.252103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3710426 Killed "${NVMF_APP[@]}" "$@"
00:29:32.117 [2024-12-09 11:44:24.264983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.117 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:32.117 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:32.117 [2024-12-09 11:44:24.265620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.117 [2024-12-09 11:44:24.265658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.117 [2024-12-09 11:44:24.265669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.117 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:32.117 [2024-12-09 11:44:24.265908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.117 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:32.117 [2024-12-09 11:44:24.266138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.117 [2024-12-09 11:44:24.266149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.117 [2024-12-09 11:44:24.266158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.117 [2024-12-09 11:44:24.266167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.117 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.117 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3712055
00:29:32.117 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3712055
00:29:32.378 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:32.378 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3712055 ']'
00:29:32.378 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:32.378 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:32.378 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:32.378 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:32.378 11:44:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.378 [2024-12-09 11:44:24.278824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.378 [2024-12-09 11:44:24.279412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.378 [2024-12-09 11:44:24.279451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.378 [2024-12-09 11:44:24.279463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.378 [2024-12-09 11:44:24.279701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.378 [2024-12-09 11:44:24.279925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.378 [2024-12-09 11:44:24.279934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.378 [2024-12-09 11:44:24.279943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.378 [2024-12-09 11:44:24.279952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.378 [2024-12-09 11:44:24.292633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.378 [2024-12-09 11:44:24.293217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.378 [2024-12-09 11:44:24.293238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.378 [2024-12-09 11:44:24.293246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.378 [2024-12-09 11:44:24.293465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.378 [2024-12-09 11:44:24.293684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.293693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.293700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.293707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.306600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.307279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.307317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.307328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.307566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.307789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.307798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.307806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.307819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.320550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.321105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.321143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.321156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.321398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.321620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.321630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.321638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.321647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.326075] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:29:32.379 [2024-12-09 11:44:24.326127] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:32.379 [2024-12-09 11:44:24.334335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.335008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.335054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.335066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.335304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.335527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.335537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.335545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.335554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.348234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.348800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.348820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.348829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.349053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.349272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.349281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.349288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.349301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.362198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.362883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.362921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.362933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.363179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.363403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.363412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.363420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.363428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.375990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.376687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.376726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.376737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.376976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.377207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.377217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.377225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.377233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.389912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.390559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.390597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.390608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.390846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.391077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.379 [2024-12-09 11:44:24.391087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.379 [2024-12-09 11:44:24.391095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.379 [2024-12-09 11:44:24.391103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.379 [2024-12-09 11:44:24.403787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.379 [2024-12-09 11:44:24.404341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.379 [2024-12-09 11:44:24.404379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.379 [2024-12-09 11:44:24.404392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.379 [2024-12-09 11:44:24.404632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.379 [2024-12-09 11:44:24.404854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.404863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.404871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.404879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.416670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:32.380 [2024-12-09 11:44:24.417564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.418229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.418267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.418278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.418516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.418739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.418748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.418756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.418764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.431466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.432040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.432061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.432070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.432289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.432507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.432516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.432523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.432531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.445412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.445967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:32.380 [2024-12-09 11:44:24.445989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:32.380 [2024-12-09 11:44:24.445999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:32.380 [2024-12-09 11:44:24.446005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:32.380 [2024-12-09 11:44:24.446015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:32.380 [2024-12-09 11:44:24.446147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.446186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.446199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.446442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.446664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.446672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.446680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.446689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.447037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:32.380 [2024-12-09 11:44:24.447190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:32.380 [2024-12-09 11:44:24.447191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:32.380 [2024-12-09 11:44:24.459388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.460082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.460121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.460133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.460374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.460597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.460606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.460614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.460622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.473302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.473878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.473917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.473928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.474176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.474399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.474409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.474418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.474432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.487108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.487797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.487835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.487846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.488094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.488318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.488328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.488336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.488344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.501028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.501725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.501763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.501774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.502020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.502258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.502268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.502277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.502285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.514973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.515653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.515692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.515703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.515941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.516172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.516182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.380 [2024-12-09 11:44:24.516190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.380 [2024-12-09 11:44:24.516198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.380 [2024-12-09 11:44:24.528921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.380 [2024-12-09 11:44:24.529473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.380 [2024-12-09 11:44:24.529511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.380 [2024-12-09 11:44:24.529523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.380 [2024-12-09 11:44:24.529761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.380 [2024-12-09 11:44:24.529984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.380 [2024-12-09 11:44:24.530000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.381 [2024-12-09 11:44:24.530008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.381 [2024-12-09 11:44:24.530025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.642 [2024-12-09 11:44:24.542903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.642 [2024-12-09 11:44:24.543561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.642 [2024-12-09 11:44:24.543600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.642 [2024-12-09 11:44:24.543612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.642 [2024-12-09 11:44:24.543850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.642 [2024-12-09 11:44:24.544081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.642 [2024-12-09 11:44:24.544092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.642 [2024-12-09 11:44:24.544100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.642 [2024-12-09 11:44:24.544108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.642 [2024-12-09 11:44:24.556783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.642 [2024-12-09 11:44:24.557382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.642 [2024-12-09 11:44:24.557401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.642 [2024-12-09 11:44:24.557410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.642 [2024-12-09 11:44:24.557629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.642 [2024-12-09 11:44:24.557847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.642 [2024-12-09 11:44:24.557856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.642 [2024-12-09 11:44:24.557863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.642 [2024-12-09 11:44:24.557870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.642 [2024-12-09 11:44:24.570747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.642 [2024-12-09 11:44:24.571388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.642 [2024-12-09 11:44:24.571426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.642 [2024-12-09 11:44:24.571438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.642 [2024-12-09 11:44:24.571684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.642 [2024-12-09 11:44:24.571907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.642 [2024-12-09 11:44:24.571916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.642 [2024-12-09 11:44:24.571924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.642 [2024-12-09 11:44:24.571932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.642 [2024-12-09 11:44:24.584607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.642 [2024-12-09 11:44:24.585181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.642 [2024-12-09 11:44:24.585219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.642 [2024-12-09 11:44:24.585231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.642 [2024-12-09 11:44:24.585472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.642 [2024-12-09 11:44:24.585694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.642 [2024-12-09 11:44:24.585704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.642 [2024-12-09 11:44:24.585711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.642 [2024-12-09 11:44:24.585719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.642 [2024-12-09 11:44:24.598402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.642 [2024-12-09 11:44:24.599079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.642 [2024-12-09 11:44:24.599117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.642 [2024-12-09 11:44:24.599130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.642 [2024-12-09 11:44:24.599369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.642 [2024-12-09 11:44:24.599592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.642 [2024-12-09 11:44:24.599601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.642 [2024-12-09 11:44:24.599609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.642 [2024-12-09 11:44:24.599617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.642 [2024-12-09 11:44:24.612319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.642 [2024-12-09 11:44:24.612980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.642 [2024-12-09 11:44:24.613025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.642 [2024-12-09 11:44:24.613037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.642 [2024-12-09 11:44:24.613275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.642 [2024-12-09 11:44:24.613497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.642 [2024-12-09 11:44:24.613510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.642 [2024-12-09 11:44:24.613518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.642 [2024-12-09 11:44:24.613526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.642 [2024-12-09 11:44:24.626210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.642 [2024-12-09 11:44:24.626909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.642 [2024-12-09 11:44:24.626947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.642 [2024-12-09 11:44:24.626958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.642 [2024-12-09 11:44:24.627205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.627428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.627437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.627446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.627454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.640124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.640818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.640856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.640867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.641113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.641336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.641345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.641353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.641362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.654039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.654484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.654505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.654513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.654731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.654950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.654958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.654965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.654976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.667847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.668514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.668552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.668564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.668802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.669031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.669041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.669049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.669057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.681730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.682276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.682314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.682325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.682563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.682785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.682795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.682803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.682810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.695695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.696426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.696464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.696475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.696713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.696935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.696944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.696953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.696961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 4760.50 IOPS, 18.60 MiB/s [2024-12-09T10:44:24.805Z]
[2024-12-09 11:44:24.711448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.712112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.712150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.712163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.712404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.712627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.712637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.712645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.712653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.725341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.726086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.726124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.726137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.726378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.726600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.726609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.726617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.726625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.739124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.739736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.739756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.739764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.739983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.740208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.740218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.643 [2024-12-09 11:44:24.740225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.643 [2024-12-09 11:44:24.740232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.643 [2024-12-09 11:44:24.752889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.643 [2024-12-09 11:44:24.753556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.643 [2024-12-09 11:44:24.753594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.643 [2024-12-09 11:44:24.753609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.643 [2024-12-09 11:44:24.753847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.643 [2024-12-09 11:44:24.754077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.643 [2024-12-09 11:44:24.754088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.644 [2024-12-09 11:44:24.754096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.644 [2024-12-09 11:44:24.754103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.644 [2024-12-09 11:44:24.766780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.644 [2024-12-09 11:44:24.767477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.644 [2024-12-09 11:44:24.767514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420
00:29:32.644 [2024-12-09 11:44:24.767525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set
00:29:32.644 [2024-12-09 11:44:24.767763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor
00:29:32.644 [2024-12-09 11:44:24.767986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.644 [2024-12-09 11:44:24.767995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.644 [2024-12-09 11:44:24.768003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.644 [2024-12-09 11:44:24.768018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.644 [2024-12-09 11:44:24.780738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.644 [2024-12-09 11:44:24.781261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.644 [2024-12-09 11:44:24.781282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.644 [2024-12-09 11:44:24.781290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.644 [2024-12-09 11:44:24.781509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.644 [2024-12-09 11:44:24.781727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.644 [2024-12-09 11:44:24.781736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.644 [2024-12-09 11:44:24.781744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.644 [2024-12-09 11:44:24.781750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.644 [2024-12-09 11:44:24.794623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.644 [2024-12-09 11:44:24.795305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.644 [2024-12-09 11:44:24.795344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.644 [2024-12-09 11:44:24.795355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.644 [2024-12-09 11:44:24.795594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.644 [2024-12-09 11:44:24.795821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.644 [2024-12-09 11:44:24.795830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.644 [2024-12-09 11:44:24.795838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.644 [2024-12-09 11:44:24.795845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.904 [2024-12-09 11:44:24.808533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.904 [2024-12-09 11:44:24.809084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-12-09 11:44:24.809105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.904 [2024-12-09 11:44:24.809113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.904 [2024-12-09 11:44:24.809332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.904 [2024-12-09 11:44:24.809550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.904 [2024-12-09 11:44:24.809559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.904 [2024-12-09 11:44:24.809566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.904 [2024-12-09 11:44:24.809573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.904 [2024-12-09 11:44:24.822449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.904 [2024-12-09 11:44:24.823104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-12-09 11:44:24.823142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.904 [2024-12-09 11:44:24.823154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.904 [2024-12-09 11:44:24.823393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.904 [2024-12-09 11:44:24.823615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.904 [2024-12-09 11:44:24.823624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.904 [2024-12-09 11:44:24.823632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.904 [2024-12-09 11:44:24.823640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.904 [2024-12-09 11:44:24.836315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.904 [2024-12-09 11:44:24.836856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-12-09 11:44:24.836893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.904 [2024-12-09 11:44:24.836904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.904 [2024-12-09 11:44:24.837150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.904 [2024-12-09 11:44:24.837373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.837382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.837390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.837403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.905 [2024-12-09 11:44:24.850283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.850932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.850970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.850982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.851231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.851454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.851464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.851471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.851479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.905 [2024-12-09 11:44:24.864157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.864755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.864775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.864783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.865001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.865227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.865237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.865244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.865251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.905 [2024-12-09 11:44:24.878131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.878775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.878814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.878825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.879071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.879295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.879305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.879313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.879322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.905 [2024-12-09 11:44:24.891994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.892461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.892499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.892511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.892751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.892973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.892982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.892991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.892998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.905 [2024-12-09 11:44:24.905892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.906540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.906579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.906590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.906828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.907059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.907069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.907077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.907085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.905 [2024-12-09 11:44:24.919755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.920451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.920489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.920500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.920739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.920961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.920970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.920977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.920985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.905 [2024-12-09 11:44:24.933661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.934265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.934304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.934319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.934557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.934780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.934789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.934796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.934804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.905 [2024-12-09 11:44:24.947511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.948138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.948176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.948188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.948430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.948652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.948661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.948669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.948677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.905 [2024-12-09 11:44:24.961366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.961933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.961952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.961960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.962185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.962403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.962412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.962419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.905 [2024-12-09 11:44:24.962426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.905 [2024-12-09 11:44:24.975293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.905 [2024-12-09 11:44:24.975950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-12-09 11:44:24.975989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.905 [2024-12-09 11:44:24.976001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.905 [2024-12-09 11:44:24.976248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.905 [2024-12-09 11:44:24.976476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.905 [2024-12-09 11:44:24.976485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.905 [2024-12-09 11:44:24.976493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.906 [2024-12-09 11:44:24.976501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.906 [2024-12-09 11:44:24.989176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.906 [2024-12-09 11:44:24.989819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-12-09 11:44:24.989858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.906 [2024-12-09 11:44:24.989869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.906 [2024-12-09 11:44:24.990116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.906 [2024-12-09 11:44:24.990339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.906 [2024-12-09 11:44:24.990348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.906 [2024-12-09 11:44:24.990356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.906 [2024-12-09 11:44:24.990364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.906 [2024-12-09 11:44:25.003039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.906 [2024-12-09 11:44:25.003745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-12-09 11:44:25.003783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.906 [2024-12-09 11:44:25.003795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.906 [2024-12-09 11:44:25.004043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.906 [2024-12-09 11:44:25.004277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.906 [2024-12-09 11:44:25.004287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.906 [2024-12-09 11:44:25.004295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.906 [2024-12-09 11:44:25.004303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.906 [2024-12-09 11:44:25.016985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.906 [2024-12-09 11:44:25.017672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-12-09 11:44:25.017710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.906 [2024-12-09 11:44:25.017722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.906 [2024-12-09 11:44:25.017960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.906 [2024-12-09 11:44:25.018191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.906 [2024-12-09 11:44:25.018201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.906 [2024-12-09 11:44:25.018209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.906 [2024-12-09 11:44:25.018221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.906 [2024-12-09 11:44:25.030890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.906 [2024-12-09 11:44:25.031407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-12-09 11:44:25.031445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.906 [2024-12-09 11:44:25.031457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.906 [2024-12-09 11:44:25.031694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.906 [2024-12-09 11:44:25.031916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.906 [2024-12-09 11:44:25.031926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.906 [2024-12-09 11:44:25.031934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.906 [2024-12-09 11:44:25.031942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.906 [2024-12-09 11:44:25.044827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.906 [2024-12-09 11:44:25.045388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-12-09 11:44:25.045426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.906 [2024-12-09 11:44:25.045439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.906 [2024-12-09 11:44:25.045681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.906 [2024-12-09 11:44:25.045903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.906 [2024-12-09 11:44:25.045912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.906 [2024-12-09 11:44:25.045920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.906 [2024-12-09 11:44:25.045928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.906 [2024-12-09 11:44:25.058614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.906 [2024-12-09 11:44:25.059316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-12-09 11:44:25.059355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:32.906 [2024-12-09 11:44:25.059366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:32.906 [2024-12-09 11:44:25.059604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:32.906 [2024-12-09 11:44:25.059827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.906 [2024-12-09 11:44:25.059836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.906 [2024-12-09 11:44:25.059844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.906 [2024-12-09 11:44:25.059852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.167 [2024-12-09 11:44:25.072529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.167 [2024-12-09 11:44:25.073157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.167 [2024-12-09 11:44:25.073196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:33.167 [2024-12-09 11:44:25.073208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:33.167 [2024-12-09 11:44:25.073446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:33.167 [2024-12-09 11:44:25.073669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.167 [2024-12-09 11:44:25.073679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.167 [2024-12-09 11:44:25.073687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.167 [2024-12-09 11:44:25.073695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.167 [2024-12-09 11:44:25.086370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.167 [2024-12-09 11:44:25.087065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.167 [2024-12-09 11:44:25.087103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:33.167 [2024-12-09 11:44:25.087116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:33.167 [2024-12-09 11:44:25.087355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:33.167 [2024-12-09 11:44:25.087578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.167 [2024-12-09 11:44:25.087587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.167 [2024-12-09 11:44:25.087596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.167 [2024-12-09 11:44:25.087604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.167 [2024-12-09 11:44:25.100287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.167 [2024-12-09 11:44:25.100988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.167 [2024-12-09 11:44:25.101033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de780 with addr=10.0.0.2, port=4420 00:29:33.167 [2024-12-09 11:44:25.101045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de780 is same with the state(6) to be set 00:29:33.167 [2024-12-09 11:44:25.101283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de780 (9): Bad file descriptor 00:29:33.167 [2024-12-09 11:44:25.101505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.167 [2024-12-09 11:44:25.101514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.167 [2024-12-09 11:44:25.101522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.167 [2024-12-09 11:44:25.101530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.167 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:33.167 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:29:33.167 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:33.167 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:33.167 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
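The trap registered here is what keeps the run clean on any exit path: shared-memory diagnostics are collected best-effort, then the target is torn down, whether the test completes, is interrupted, or is killed; the nvmftestfini teardown it names is the one visible near the end of this test. A sketch of the same idiom with hypothetical stand-ins for the two helpers (process_shm and nvmftestfini are the real ones; the function bodies below are placeholders):

  # Hypothetical cleanup hook in the style of nvmf/common.sh:
  # diagnostics may fail without aborting the teardown, hence '|| :'.
  collect_shm_stats() { echo 'placeholder for process_shm'; }
  teardown_target()   { echo 'placeholder for nvmftestfini'; }
  trap 'collect_shm_stats || :; teardown_target' SIGINT SIGTERM EXIT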
00:29:33.168 [2024-12-09 11:44:25.172435] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
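From here through the listener call below, the script assembles the whole target side over RPC: create the TCP transport, back it with a 64 MiB malloc bdev, wrap that in a subsystem, and expose it on 10.0.0.2:4420. As a standalone sketch, the same sequence against an already-running nvmf_tgt would look roughly like this (the rpc.py path and its default RPC socket are assumptions; the arguments are copied from the trace):

  # Sketch of the bring-up that rpc_cmd performs in host/bdevperf.sh@17-21.
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte in-capsule data
  $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the last call completes, the "Target Listening" notice appears and the host's reconnect loop finally succeeds.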
00:29:33.168 Malloc0
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:33.168 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.169 [2024-12-09 11:44:25.235836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-12-09 11:44:25.239159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:33.169 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:33.169 11:44:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3710793
[2024-12-09 11:44:25.305123] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:29:34.678 4754.43 IOPS, 18.57 MiB/s
[2024-12-09T10:44:27.782Z] 5560.88 IOPS, 21.72 MiB/s
[2024-12-09T10:44:28.738Z] 6196.44 IOPS, 24.20 MiB/s
[2024-12-09T10:44:30.123Z] 6699.00 IOPS, 26.17 MiB/s
[2024-12-09T10:44:31.065Z] 7129.82 IOPS, 27.85 MiB/s
[2024-12-09T10:44:32.008Z] 7467.42 IOPS, 29.17 MiB/s
[2024-12-09T10:44:32.951Z] 7760.15 IOPS, 30.31 MiB/s
[2024-12-09T10:44:33.893Z] 8029.86 IOPS, 31.37 MiB/s
00:29:41.731 Latency(us)
[2024-12-09T10:44:33.893Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:41.731 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:41.731 Verification LBA range: start 0x0 length 0x4000
00:29:41.731 Nvme1n1                     :      15.01    8240.94      32.19    9913.30       0.00    7025.47     525.65   14745.60
00:29:41.731 [2024-12-09T10:44:33.893Z] ===================================================================================================================
[2024-12-09T10:44:33.893Z] Total                       :               8240.94      32.19    9913.30       0.00    7025.47     525.65   14745.60
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:41.731 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:41.731 rmmod nvme_tcp
00:29:41.731 rmmod nvme_fabrics
00:29:41.992 rmmod nvme_keyring
00:29:41.992 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:41.992 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:41.992 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:41.992 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3712055 ']'
00:29:41.992 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3712055
00:29:41.992 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3712055 ']'
00:29:41.992 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3712055
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3712055
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3712055'
00:29:41.993 killing process with pid 3712055
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3712055
00:29:41.993 11:44:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3712055
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:41.993 11:44:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:44.540 
00:29:44.540 real 0m27.541s
00:29:44.540 user 1m1.129s
00:29:44.540 sys 0m7.489s
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:44.540 ************************************
00:29:44.540 END TEST nvmf_bdevperf
00:29:44.540 ************************************
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.540 ************************************
00:29:44.540 START TEST nvmf_target_disconnect
************************************
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:44.540 * Looking for test storage...
00:29:44.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:44.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:44.540 --rc genhtml_branch_coverage=1
00:29:44.540 --rc genhtml_function_coverage=1
00:29:44.540 --rc genhtml_legend=1
00:29:44.540 --rc geninfo_all_blocks=1
00:29:44.540 --rc geninfo_unexecuted_blocks=1
00:29:44.540 
00:29:44.540 '
00:29:44.540 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:44.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:44.540 --rc genhtml_branch_coverage=1
00:29:44.540 --rc genhtml_function_coverage=1
00:29:44.540 --rc genhtml_legend=1
00:29:44.540 --rc geninfo_all_blocks=1
00:29:44.540 --rc geninfo_unexecuted_blocks=1
00:29:44.540 
00:29:44.540 '
00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:29:44.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:44.541 --rc genhtml_branch_coverage=1
00:29:44.541 --rc genhtml_function_coverage=1
00:29:44.541 --rc genhtml_legend=1
00:29:44.541 --rc geninfo_all_blocks=1
00:29:44.541 --rc geninfo_unexecuted_blocks=1
00:29:44.541 
00:29:44.541 '
00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:29:44.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:44.541 --rc genhtml_branch_coverage=1
00:29:44.541 --rc genhtml_function_coverage=1
00:29:44.541 --rc genhtml_legend=1
00:29:44.541 --rc geninfo_all_blocks=1
00:29:44.541 --rc geninfo_unexecuted_blocks=1
00:29:44.541 
00:29:44.541 '
00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.541 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:52.678 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:52.678 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:52.678 Found net devices under 0000:31:00.0: cvl_0_0 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:52.678 Found net devices under 0000:31:00.1: cvl_0_1 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.678 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
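The device discovery traced above (gather_supported_nvmf_pci_devs) reduces to a sysfs scan: collect the PCI functions whose vendor:device pair matches a supported NIC (here Intel 0x8086:0x159b, the E810 "ice" ports found at 0000:31:00.0/1), then resolve each function to its kernel net device via /sys/bus/pci/devices/$pci/net, exactly as nvmf_tcp_init picks up cvl_0_0 and cvl_0_1 below. A minimal bash sketch of that shape, assuming the same sysfs layout as this node; the real nvmf/common.sh builds a pci_bus_cache up front instead of re-reading sysfs, and also matches x722 and Mellanox device IDs:

    intel=0x8086
    e810=()
    for pci in /sys/bus/pci/devices/*; do
        # vendor/device read back as e.g. 0x8086 / 0x159b, matching the log
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        [[ $vendor == "$intel" && $device == 0x159b ]] && e810+=("${pci##*/}")
    done
    net_devs=()
    for pci in "${e810[@]}"; do
        # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the traced script
        for net_path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -d $net_path ]] || continue
            net_devs+=("${net_path##*/}")
            echo "Found net devices under $pci: ${net_path##*/}"
        done
    done

With two matching ports found, the script then moves one interface into a network namespace (cvl_0_0_ns_spdk) so target and initiator can talk over real E810 hardware on 10.0.0.2/10.0.0.1, as the ip netns / ip addr commands in the trace below show.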
00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:29:52.679 00:29:52.679 --- 10.0.0.2 ping statistics --- 00:29:52.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.679 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:29:52.679 00:29:52.679 --- 10.0.0.1 ping statistics --- 00:29:52.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.679 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.679 ************************************ 00:29:52.679 START TEST nvmf_target_disconnect_tc1 00:29:52.679 ************************************ 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.679 11:44:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:52.679 11:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.679 [2024-12-09 11:44:44.050825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-12-09 11:44:44.050898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dd00 with addr=10.0.0.2, port=4420 00:29:52.679 [2024-12-09 11:44:44.050938] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:52.679 [2024-12-09 11:44:44.050960] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:52.679 [2024-12-09 11:44:44.050968] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:52.679 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:52.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:52.679 Initializing NVMe Controllers 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:52.679 00:29:52.679 real 0m0.129s 00:29:52.679 user 0m0.062s 00:29:52.679 sys 0m0.067s 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:52.679 ************************************ 00:29:52.679 END TEST nvmf_target_disconnect_tc1 00:29:52.679 ************************************ 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.679 ************************************ 00:29:52.679 START TEST nvmf_target_disconnect_tc2 00:29:52.679 ************************************ 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3718238 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3718238 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3718238 ']' 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.679 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.680 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.680 11:44:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.680 [2024-12-09 11:44:44.220212] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:29:52.680 [2024-12-09 11:44:44.220271] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.680 [2024-12-09 11:44:44.321782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.680 [2024-12-09 11:44:44.373131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.680 [2024-12-09 11:44:44.373184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:52.680 [2024-12-09 11:44:44.373192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.680 [2024-12-09 11:44:44.373200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.680 [2024-12-09 11:44:44.373206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.680 [2024-12-09 11:44:44.375572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:52.680 [2024-12-09 11:44:44.375732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:52.680 [2024-12-09 11:44:44.375893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:52.680 [2024-12-09 11:44:44.375894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:52.941 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.941 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:52.941 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.941 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.941 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.942 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.942 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.942 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.942 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.202 Malloc0 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.202 [2024-12-09 11:44:45.122503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.202 11:44:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.202 [2024-12-09 11:44:45.150878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.202 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.203 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.203 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.203 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.203 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3718392 00:29:53.203 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:53.203 11:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.124 11:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3718238 00:29:55.124 11:44:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error 
(sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Read completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 Write completed with error (sct=0, sc=8) 00:29:55.124 starting I/O failed 00:29:55.124 [2024-12-09 11:44:47.179166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.124 [2024-12-09 11:44:47.179612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.179659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.179900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.179912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.180353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.180388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 
00:29:55.124 [2024-12-09 11:44:47.180612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.180628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.180816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.180828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.180939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.180949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.181406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.181416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.181710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.181720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.181883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.181893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.182083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.182094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.182315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.182326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-12-09 11:44:47.182635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-12-09 11:44:47.182644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.182822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.182832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 
00:29:55.125 [2024-12-09 11:44:47.183167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.183177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.183508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.183518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.183843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.183852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.184086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.184097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.184473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.184483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.184787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.184796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.185135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.185145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.185450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.185460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.185749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.185759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 00:29:55.125 [2024-12-09 11:44:47.186044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.186053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it. 
00:29:55.125 [2024-12-09 11:44:47.186472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.125 [2024-12-09 11:44:47.186483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.125 qpair failed and we were unable to recover it.
00:29:55.125 [... the same pair of errors (posix.c:1054 connect() failed, errno = 111, followed by nvme_tcp.c:2288 sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420) repeats 208 more times between 11:44:47.186 and 11:44:47.252; only the microsecond timestamps differ, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:55.130 [2024-12-09 11:44:47.251816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.130 [2024-12-09 11:44:47.251825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.130 qpair failed and we were unable to recover it.
00:29:55.130 [2024-12-09 11:44:47.252136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.130 [2024-12-09 11:44:47.252146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.130 qpair failed and we were unable to recover it. 00:29:55.130 [2024-12-09 11:44:47.252339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.130 [2024-12-09 11:44:47.252348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.130 qpair failed and we were unable to recover it. 00:29:55.130 [2024-12-09 11:44:47.252506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.130 [2024-12-09 11:44:47.252516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.130 qpair failed and we were unable to recover it. 00:29:55.130 [2024-12-09 11:44:47.252851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.130 [2024-12-09 11:44:47.252861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.130 qpair failed and we were unable to recover it. 00:29:55.130 [2024-12-09 11:44:47.253166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.130 [2024-12-09 11:44:47.253176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.130 qpair failed and we were unable to recover it. 00:29:55.130 [2024-12-09 11:44:47.253485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.253497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.253776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.253794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.254118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.254128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.254450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.254460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.254676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.254685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 
00:29:55.131 [2024-12-09 11:44:47.255000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.255014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.255319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.255330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.255606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.255617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.255945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.255956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.256268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.256278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.256570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.256587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.256916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.256926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.257235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.257246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.257563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.257573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.257861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.257870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 
00:29:55.131 [2024-12-09 11:44:47.258060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.258070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.258395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.258405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.258698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.258709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.259032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.259043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.259340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.259349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.259658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.259668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.259855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.259865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.260084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.260094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.260263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.260273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.260569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.260579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 
00:29:55.131 [2024-12-09 11:44:47.260879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.260889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.261082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.261092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.261412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.261424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.261732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.261741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.261916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.261927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.262218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.262228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.262546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.262556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.262888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.262899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.263198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.263209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.263472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.263482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 
00:29:55.131 [2024-12-09 11:44:47.263815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.263826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.264116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.264126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.264451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.264461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.264759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.131 [2024-12-09 11:44:47.264768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.131 qpair failed and we were unable to recover it. 00:29:55.131 [2024-12-09 11:44:47.265055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.265065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.265252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.265262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.265462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.265473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.265764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.265775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.266084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.266094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.266425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.266435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 
00:29:55.132 [2024-12-09 11:44:47.266778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.266788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.267100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.267110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.267413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.267422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.267713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.267722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.267924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.267933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.268222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.268232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.268541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.268550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.268853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.268863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.269231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.269241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.269535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.269546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 
00:29:55.132 [2024-12-09 11:44:47.269849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.269858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.270164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.270175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.270479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.270489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.270799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.270808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.270978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.270988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.271342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.271352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.271683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.271693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.271980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.271990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.272332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.272343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.272635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.272645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 
00:29:55.132 [2024-12-09 11:44:47.272956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.272966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.273138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.273150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.273341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.273353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.273728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.273739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.274074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.274084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.274430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.274439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.274725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.274735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.275032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.275042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.276025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.276046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.276346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.276358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 
00:29:55.132 [2024-12-09 11:44:47.277206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.277225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.277421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.277434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.277651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.132 [2024-12-09 11:44:47.277660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.132 qpair failed and we were unable to recover it. 00:29:55.132 [2024-12-09 11:44:47.277992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.133 [2024-12-09 11:44:47.278002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.133 qpair failed and we were unable to recover it. 00:29:55.133 [2024-12-09 11:44:47.278339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.133 [2024-12-09 11:44:47.278349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.133 qpair failed and we were unable to recover it. 00:29:55.133 [2024-12-09 11:44:47.278659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.133 [2024-12-09 11:44:47.278669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.133 qpair failed and we were unable to recover it. 00:29:55.133 [2024-12-09 11:44:47.279015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.133 [2024-12-09 11:44:47.279025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.133 qpair failed and we were unable to recover it. 00:29:55.409 [2024-12-09 11:44:47.279333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-12-09 11:44:47.279344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-12-09 11:44:47.279639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-12-09 11:44:47.279650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-12-09 11:44:47.279984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-12-09 11:44:47.279995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 
00:29:55.410 [2024-12-09 11:44:47.280292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.280302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.280675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.280685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.280992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.281001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.281341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.281352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.281551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.281561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.281786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.281795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.282274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.282284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.282577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.282594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.282897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.282906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.283230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.283240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 
00:29:55.410 [2024-12-09 11:44:47.283530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.283540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.283830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.283840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.284157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.284167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.284463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.284473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.284792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.284802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.284980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.284991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.285332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.285343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.285530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.285540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.285826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.285837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.286176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.286186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 
00:29:55.410 [2024-12-09 11:44:47.286490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.286501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.286809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.286819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.286989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.287002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.287382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.287392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.287704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.287714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.287997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.288007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.288198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.288209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.288550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-12-09 11:44:47.288560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-12-09 11:44:47.288846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.288856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.289140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.289151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 
00:29:55.411 [2024-12-09 11:44:47.289462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.289472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.289753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.289764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.290054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.290065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.290375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.290385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.290676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.290686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.291017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.291027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.291450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.291460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.291644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.291655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.291778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.291787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-12-09 11:44:47.292110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-12-09 11:44:47.292121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 
00:29:55.411 [2024-12-09 11:44:47.292430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.411 [2024-12-09 11:44:47.292440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.411 qpair failed and we were unable to recover it.
00:29:55.419 [the three messages above repeat, differing only in their timestamps, roughly 200 more times between 11:44:47.292 and 11:44:47.355: every reconnect attempt to 10.0.0.2 port 4420 on tqpair=0x7f0024000b90 fails with connect() errno = 111, and each time the qpair cannot be recovered]
00:29:55.420 [2024-12-09 11:44:47.356183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.356193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-12-09 11:44:47.356532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.356541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-12-09 11:44:47.356907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.356917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-12-09 11:44:47.357228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.357238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-12-09 11:44:47.357567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.357577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-12-09 11:44:47.357870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.357880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-12-09 11:44:47.358188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.358198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-12-09 11:44:47.358416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-12-09 11:44:47.358426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.358579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.358588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.358915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.358925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-12-09 11:44:47.359243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.359253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.359571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.359580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.359754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.359764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.360071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.360082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.360308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.360318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.360594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.360605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.360768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.360779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.361085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.361094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.361392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.361401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.361738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.361747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-12-09 11:44:47.362039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.362049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.362321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.362331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.362638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.362647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.362939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.362948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.363088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.363098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.363382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.363392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.363729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.363738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.364022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.364032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.364239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.364249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.364561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.364571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-12-09 11:44:47.364878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.364887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.365239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-12-09 11:44:47.365249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-12-09 11:44:47.365450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.365459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.365617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.365626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.365919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.365929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.366110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.366121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.366314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.366324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.366596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.366606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.366907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.366916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.367132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.367144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-12-09 11:44:47.367468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.367477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.367664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.367675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.367980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.367989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.368295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.368305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.368476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.368487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.368766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.368776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.369076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.369085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.369397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.369407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.369638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.369649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.369951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.369961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-12-09 11:44:47.370306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.370316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.370693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.370704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.371017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.371028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.371344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.371354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.371744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.371753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.372064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.372074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-12-09 11:44:47.372379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-12-09 11:44:47.372389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.372674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.372689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.373065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.373075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.373401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.373410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 
00:29:55.423 [2024-12-09 11:44:47.373799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.373809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.374112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.374122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.374430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.374440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.374733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.374742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.375034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.375044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.375353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.375362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.375655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.375665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.375977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.375986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.376333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.376344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.376688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.376697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 
00:29:55.423 [2024-12-09 11:44:47.377001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.377019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-12-09 11:44:47.377321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-12-09 11:44:47.377331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.377626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.377636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.377952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.377961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.378147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.378157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.378379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.378389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.378693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.378702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.379087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.379096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.379289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.379298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.379623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.379635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 
00:29:55.424 [2024-12-09 11:44:47.379951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.379961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.380270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.380280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.380569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.380580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.380883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.380894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.381224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.381234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.381614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.381623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.381930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.381939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.382255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.382266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.382455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.382466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.382746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.382756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 
00:29:55.424 [2024-12-09 11:44:47.383075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.383084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.383402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.383411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.383717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.383727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.384025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.384036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.384312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.384322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.384633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.384643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.384834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.384843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.385023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.385033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.385276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.385286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-12-09 11:44:47.385508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-12-09 11:44:47.385519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 
00:29:55.424 [2024-12-09 11:44:47.385833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.385842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.386149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.386159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.386495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.386506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.386689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.386699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.387014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.387024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.387341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.387350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.387728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.387738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.388055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.388065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.388386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.388395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.388675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.388685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 
00:29:55.425 [2024-12-09 11:44:47.389003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.389018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.389301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.389310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.389687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.389697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.390037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.390047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.390338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.390348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.390690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.390699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.391009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.391022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.391343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.391353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.391637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.391647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.391933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.391945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 
00:29:55.425 [2024-12-09 11:44:47.392270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.392280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.392586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.392596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.392922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.392932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.393256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.393266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.393436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.393447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.393630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.393639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.393962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.393972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.394186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.394197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.394405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.394415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 00:29:55.425 [2024-12-09 11:44:47.394784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-12-09 11:44:47.394793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 
00:29:55.425 [2024-12-09 11:44:47.395044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.425 [2024-12-09 11:44:47.395053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.425 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." entries from 11:44:47.395375 through 11:44:47.418592 omitted ...]
00:29:55.429 [2024-12-09 11:44:47.418903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.429 [2024-12-09 11:44:47.418914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.429 qpair failed and we were unable to recover it.
00:29:55.429 [2024-12-09 11:44:47.419117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.429 [2024-12-09 11:44:47.419127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.429 qpair failed and we were unable to recover it.
00:29:55.429 [2024-12-09 11:44:47.419324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.429 [2024-12-09 11:44:47.419335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.429 qpair failed and we were unable to recover it.
00:29:55.429 Read completed with error (sct=0, sc=8)
00:29:55.429 starting I/O failed
[... the remaining "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs for this qpair omitted ...]
00:29:55.430 [2024-12-09 11:44:47.419617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
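The burst above decodes as follows, assuming the standard NVMe status encoding: sct is the Status Code Type and sc the Status Code from each completion entry; sct=0 selects the generic command status set, in which sc=8 (08h) is "Command Aborted due to SQ Deletion", i.e. I/O that was still in flight when the queue pair was torn down, and the "CQ transport error -6" that follows is -ENXIO, whose strerror text is exactly "No such device or address". A minimal C sketch of how those fields unpack from a completion entry's Dword 3 (bit offsets per the NVMe base specification; the dw3 value here is fabricated to reproduce the sct=0, sc=8 case):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Hypothetical completion-queue-entry Dword 3: phase bit and the
     * command identifier zeroed, status field set to SCT=0 (generic),
     * SC=0x08. */
    uint32_t dw3 = 0x8u << 17;

    unsigned sc  = (dw3 >> 17) & 0xff;  /* Status Code       */
    unsigned sct = (dw3 >> 25) & 0x07;  /* Status Code Type  */
    unsigned dnr = (dw3 >> 31) & 0x01;  /* Do Not Retry flag */

    /* Prints "sct=0, sc=8, dnr=0" - the same pair the log reports for
     * every aborted read and write. */
    printf("sct=%u, sc=%u, dnr=%u\n", sct, sc, dnr);
    return 0;
}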
00:29:55.430 [2024-12-09 11:44:47.419945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.430 [2024-12-09 11:44:47.419961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:55.430 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." entries from 11:44:47.420153 through 11:44:47.447891 omitted ...]
00:29:55.434 [2024-12-09 11:44:47.448409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.434 [2024-12-09 11:44:47.448447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.434 qpair failed and we were unable to recover it.
[... the same entries for tqpair=0x7f0024000b90 from 11:44:47.448782 through 11:44:47.456419 omitted ...]
00:29:55.435 [2024-12-09 11:44:47.456720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.435 [2024-12-09 11:44:47.456731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.435 qpair failed and we were unable to recover it.
00:29:55.435 [2024-12-09 11:44:47.457061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.435 [2024-12-09 11:44:47.457071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.435 qpair failed and we were unable to recover it. 00:29:55.435 [2024-12-09 11:44:47.457381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.457391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.457757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.457767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.457945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.457954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.458255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.458266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.458579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.458588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.458836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.458847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.459193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.459203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.459522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.459531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.459851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.459861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 
00:29:55.436 [2024-12-09 11:44:47.460179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.460190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.460393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.460406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.460522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.460532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.460768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.460779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.460959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.460968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.461339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.461350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.461668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.461678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.461860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.461870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.462248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.462258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.462443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.462453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 
00:29:55.436 [2024-12-09 11:44:47.462748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.462758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.462968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.462978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.463308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.463318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.463639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.463649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.463956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.463967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.464185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.464196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.464531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.464542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.464731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.464742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.465036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.465047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.465440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.465450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 
00:29:55.436 [2024-12-09 11:44:47.465636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.436 [2024-12-09 11:44:47.465647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.436 qpair failed and we were unable to recover it. 00:29:55.436 [2024-12-09 11:44:47.465985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.465995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.466328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.466338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.466524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.466533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.466850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.466860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.467059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.467070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.467284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.467294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.467508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.467517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.467782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.467791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.468002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.468021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 
00:29:55.437 [2024-12-09 11:44:47.468268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.468278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.468558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.468567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.468879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.468889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.469225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.469235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.469564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.469574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.469893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.469903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.470190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.470200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.470520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.470530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.470851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.470860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.471038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.471048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 
00:29:55.437 [2024-12-09 11:44:47.471256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.471266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.471591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.471601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.471647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.471657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.472001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.472019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.472263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.472273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.472640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.472650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.472973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.472983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.473281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.473291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.473607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.473616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.473916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.473926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 
00:29:55.437 [2024-12-09 11:44:47.474254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.474264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.474461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.474471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.474848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.474861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.475158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.475168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.475303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.475313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.475674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.475684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.475876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.475887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.476227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.476237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.476551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.476561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.437 qpair failed and we were unable to recover it. 00:29:55.437 [2024-12-09 11:44:47.476875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.437 [2024-12-09 11:44:47.476884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 
00:29:55.438 [2024-12-09 11:44:47.477188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.477198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.477516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.477527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.477740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.477750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.478111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.478121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.478311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.478321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.478653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.478663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.478961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.478972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.479272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.479282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.479575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.479586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.479740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.479751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 
00:29:55.438 [2024-12-09 11:44:47.480057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.480067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.480359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.480369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.480689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.480699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.480905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.480914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.481084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.481094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.481498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.481508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.481792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.481802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.482145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.482155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.482344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.482354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.482662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.482672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 
00:29:55.438 [2024-12-09 11:44:47.482988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.482999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.483357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.483367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.483629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.483639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.483948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.483958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.484310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.484321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.484619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.484630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.484934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.484944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.485354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.485365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.485676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.485686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.486024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.486034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 
00:29:55.438 [2024-12-09 11:44:47.486380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.486389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.486697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.486707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.438 [2024-12-09 11:44:47.486896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.438 [2024-12-09 11:44:47.486912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.438 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.487235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.487245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.487625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.487634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.487942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.487951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.488340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.488350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.488660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.488670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.488893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.488903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.489084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.489098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 
00:29:55.439 [2024-12-09 11:44:47.489393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.489403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.489689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.489699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.490021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.490031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.490316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.490326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.490633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.490643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.490926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.490935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.491248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.491259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.491575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.491586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.491897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.491906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.492231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.492242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 
00:29:55.439 [2024-12-09 11:44:47.492548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.492558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.492827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.492837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.493129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.493139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.493445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.493455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.493732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.493742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.493944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.493955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.494285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.494296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.494616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.494626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.494923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.494934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.495231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.495242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 
00:29:55.439 [2024-12-09 11:44:47.495548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.495559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.495900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.495911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.496225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.496236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.496530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.496541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.496854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.496865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.497077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.497088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.497438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.497448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.497731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.497741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.498066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.498076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.498377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.498388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 
00:29:55.439 [2024-12-09 11:44:47.498727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.498736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.439 [2024-12-09 11:44:47.499006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.439 [2024-12-09 11:44:47.499019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.439 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.499333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.499345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.499680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.499690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.500033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.500043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.500357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.500367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.500679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.500689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.500978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.500987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.501288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.501298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.501601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.501617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 
00:29:55.440 [2024-12-09 11:44:47.502013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.502023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.502337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.502346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.502633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.502643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.502948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.502958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.503276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.503286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.503612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.503621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.503932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.503942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.504268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.504278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.504588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.504598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.504873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.504884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 
00:29:55.440 [2024-12-09 11:44:47.505180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.505191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.505372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.505381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.505547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.505557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.505842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.505852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.506162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.506172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.506477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.506486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.506728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.506738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.507032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.507042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.507356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.507365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.507580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.507590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 
00:29:55.440 [2024-12-09 11:44:47.507809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.507819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.508120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.508130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.508428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.508438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.508756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.508765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.508998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.509009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.509208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.509219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.509554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.509564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.509867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.509877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.510182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.510192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 00:29:55.440 [2024-12-09 11:44:47.510501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.440 [2024-12-09 11:44:47.510511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.440 qpair failed and we were unable to recover it. 
00:29:55.440 [2024-12-09 11:44:47.510801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.510810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.511129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.511139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.511489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.511501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.511799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.511809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.512133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.512143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.512454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.512463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.512756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.512766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.513095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.513105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.513484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.513494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.513761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.513770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 
00:29:55.441 [2024-12-09 11:44:47.513968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.513978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.514294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.514305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.514601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.514611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.514906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.514916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.515213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.515224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.515544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.515554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.515897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.515908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.516294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.516304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.516504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.516514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.516838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.516849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 
00:29:55.441 [2024-12-09 11:44:47.517156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.517166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.517489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.517499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.517712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.517721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.518044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.518055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.518256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.518266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.518569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.518579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.518738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.518749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.519063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.519073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.519385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.519395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.519721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.519731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 
00:29:55.441 [2024-12-09 11:44:47.519950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.519960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.520226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.520237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.520570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.520580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.520863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.520873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.521184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.521194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.521486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.521496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.521818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.521828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.522126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.522137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.441 qpair failed and we were unable to recover it. 00:29:55.441 [2024-12-09 11:44:47.522413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.441 [2024-12-09 11:44:47.522423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.522605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.522616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 
00:29:55.442 [2024-12-09 11:44:47.522834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.522844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.523138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.523149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.523462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.523474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.523769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.523779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.524078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.524088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.524389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.524399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.524718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.524728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.524952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.524962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.525263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.525274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.525578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.525587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 
00:29:55.442 [2024-12-09 11:44:47.525900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.525910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.526184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.526194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.526489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.526500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.526847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.526857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.527164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.527175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.527470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.527480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.527785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.527795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.528179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.528189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.528403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.528412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.528689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.528699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 
00:29:55.442 [2024-12-09 11:44:47.529003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.529023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.529335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.529346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.529648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.529657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.529945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.529954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.530124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.530135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.530407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.530417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.530768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.530778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.531077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.531087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.531400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.531411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.531698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.531709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 
00:29:55.442 [2024-12-09 11:44:47.532014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.532025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.532400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.532410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.532735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.442 [2024-12-09 11:44:47.532744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.442 qpair failed and we were unable to recover it. 00:29:55.442 [2024-12-09 11:44:47.533093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.533103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.533265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.533275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.533558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.533568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.533769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.533779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.534041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.534052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.534206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.534217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.534434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.534444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 
00:29:55.443 [2024-12-09 11:44:47.534794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.534804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.534995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.535006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.535326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.535339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.535674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.535685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.536027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.536038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.536410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.536420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.536736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.536746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.537082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.537092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.537393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.537404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.537706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.537716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 
00:29:55.443 [2024-12-09 11:44:47.537954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.537963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.538268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.538278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.538462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.538472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.538780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.538790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.539099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.539109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.539489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.539499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.539733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.539743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.539893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.539904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.540254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.540264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.540556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.540566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 
00:29:55.443 [2024-12-09 11:44:47.540881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.540890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.541136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.541146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.541464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.541474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.541742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.541752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.542093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.542104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.542439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.542449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.542771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.542781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.543116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.543126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.543447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.543457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.543763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.543774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 
00:29:55.443 [2024-12-09 11:44:47.543964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.543976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.443 qpair failed and we were unable to recover it. 00:29:55.443 [2024-12-09 11:44:47.544281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.443 [2024-12-09 11:44:47.544293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.544475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.544486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.544808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.544819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.545044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.545055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.545369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.545378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.545755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.545765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.546071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.546081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.546285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.546296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.546619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.546629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 
00:29:55.444 [2024-12-09 11:44:47.546958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.546969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.547256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.547266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.547577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.547591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.547962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.547972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.548263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.548274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.548490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.548501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.548760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.548770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.549063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.549073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.549393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.549403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 00:29:55.444 [2024-12-09 11:44:47.549709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.444 [2024-12-09 11:44:47.549718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.444 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt up to the final one below; duplicate entries elided ...]
00:29:55.727 [2024-12-09 11:44:47.608057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.608067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.608384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.608394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.608700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.608715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.609055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.609065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.609412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.609422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.609763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.609772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.610084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.610094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.610259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.610270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.610632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.610642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.610962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.610973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 
00:29:55.727 [2024-12-09 11:44:47.611276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.611286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.611666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.611676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.612007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.612021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.612306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.612315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.612625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.612634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.612962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.612972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.613356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.613367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.613678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.613688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.614016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.614026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.614330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.614346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 
00:29:55.727 [2024-12-09 11:44:47.614659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.614668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.614854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.614865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.615210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.615220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.615527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.615537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.615869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.615878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.616164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.616174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.616489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.616499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.616828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.616839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.617180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.617190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.617442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.617452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 
00:29:55.727 [2024-12-09 11:44:47.617658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.617667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.617783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.617792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.618096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.727 [2024-12-09 11:44:47.618106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.727 qpair failed and we were unable to recover it. 00:29:55.727 [2024-12-09 11:44:47.618503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.618512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.618839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.618849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.619132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.619142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.619450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.619460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.619632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.619643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.619953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.619962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.620248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.620258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 
00:29:55.728 [2024-12-09 11:44:47.620616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.620627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.620938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.620949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.621237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.621249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.621534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.621544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.621831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.621841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.622153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.622163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.622472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.622483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.622775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.622785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.623084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.623094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.623263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.623274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 
00:29:55.728 [2024-12-09 11:44:47.623462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.623472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.623878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.623888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.624186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.624203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.624515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.624524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.624836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.624845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.625168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.625178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.625470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.625480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.625773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.625782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.626105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.626115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.626423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.626433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 
00:29:55.728 [2024-12-09 11:44:47.626717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.626727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.627033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.627044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.627367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.627376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.627679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.627689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.628049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.628059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.628274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.628284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.628608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.628619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.628996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.629006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.629338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.629349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.629502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.629515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 
00:29:55.728 [2024-12-09 11:44:47.629716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.728 [2024-12-09 11:44:47.629726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.728 qpair failed and we were unable to recover it. 00:29:55.728 [2024-12-09 11:44:47.629999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.630012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.630312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.630322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.630628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.630638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.630959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.630970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.631320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.631330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.631652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.631663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.631966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.631976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.632164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.632175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.632410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.632421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 
00:29:55.729 [2024-12-09 11:44:47.632719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.632730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.633024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.633035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.633364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.633374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.633684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.633694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.634007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.634020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.634243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.634253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.634564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.634573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.634821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.634831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.635130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.635140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.635348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.635357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 
00:29:55.729 [2024-12-09 11:44:47.635655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.635665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.635975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.635984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.636362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.636372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.636671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.636681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.636983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.636993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.637276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.637286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.637579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.637589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.637776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.637785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.638115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.638125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.638444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.638459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 
00:29:55.729 [2024-12-09 11:44:47.638647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.638658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.638948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.638959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.639255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.639266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.639573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.639582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.639888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.639897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.640200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.640211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.640505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.640515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.640837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.640847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.641144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.641162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.729 qpair failed and we were unable to recover it. 00:29:55.729 [2024-12-09 11:44:47.641474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.729 [2024-12-09 11:44:47.641486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 
00:29:55.730 [2024-12-09 11:44:47.641809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.641820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.642132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.642142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.642512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.642521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.642819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.642830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.643191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.643201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.643590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.643600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.643903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.643913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.644230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.644240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.644527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.644538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.644823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.644832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 
00:29:55.730 [2024-12-09 11:44:47.645207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.645217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.645528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.645538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.645848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.645858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.646193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.646203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.646577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.646587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.646937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.646946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.647260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.647271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.647623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.647633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.647927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.647936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.648258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.648268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 
00:29:55.730 [2024-12-09 11:44:47.648562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.648571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.648864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.648881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.649225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.649235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.649431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.649442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.649778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.649787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.649971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.649982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.650356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.650367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.650673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.650683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.651039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.651049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 00:29:55.730 [2024-12-09 11:44:47.651355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.730 [2024-12-09 11:44:47.651365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.730 qpair failed and we were unable to recover it. 
00:29:55.730 [2024-12-09 11:44:47.651555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.730 [2024-12-09 11:44:47.651565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.730 qpair failed and we were unable to recover it.
[... the same three-line error repeats for 210 consecutive connection attempts between 2024-12-09 11:44:47.651 and 11:44:47.715, each failing with errno = 111 for tqpair=0x7f0024000b90, addr=10.0.0.2, port=4420 ...]
00:29:55.738 [2024-12-09 11:44:47.715159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.738 [2024-12-09 11:44:47.715171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.738 qpair failed and we were unable to recover it.
00:29:55.738 [2024-12-09 11:44:47.715485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.715495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.715809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.715819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.716123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.716133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.716485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.716494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.716744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.716754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.717079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.717089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.717264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.717275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.717653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.717662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.717911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.717921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.718105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.718116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 
00:29:55.738 [2024-12-09 11:44:47.718431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.718441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.718733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.718743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.719036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.719046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.719430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.719439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.719738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-09 11:44:47.719749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.738 qpair failed and we were unable to recover it. 00:29:55.738 [2024-12-09 11:44:47.720035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.720045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.720312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.720322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.720488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.720497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.720837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.720848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.721155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.721166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 
00:29:55.739 [2024-12-09 11:44:47.721346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.721357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.721692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.721702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.722027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.722037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.722357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.722375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.722573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.722582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.722989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.722999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.723188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.723198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.723607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.723617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.723829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.723839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.724123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.724134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 
00:29:55.739 [2024-12-09 11:44:47.724449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.724459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.724727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.724736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.724915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.724925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.725256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.725266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.725452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.725462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.725829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.725839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.726139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.726149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.726498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.726508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.726826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.726836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.727127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.727139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 
00:29:55.739 [2024-12-09 11:44:47.727437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.727447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.727652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.727662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.727853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.727863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.728110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.728120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.728411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.728421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.739 [2024-12-09 11:44:47.728746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-09 11:44:47.728756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.739 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.729056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.729066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.729351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.729360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.729530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.729539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.729837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.729847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 
00:29:55.740 [2024-12-09 11:44:47.730175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.730186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.730483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.730501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.730682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.730691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.730922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.730932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.731256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.731266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.731510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.731519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.731698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.731708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.731920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.731930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.732099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.732109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.732408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.732418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 
00:29:55.740 [2024-12-09 11:44:47.732726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.732736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.733103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.733113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.733404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.733415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.733728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.733738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.734041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.734052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.734225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.734237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.734574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.734584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.734907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.734917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.735202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.735212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.735514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.735524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 
00:29:55.740 [2024-12-09 11:44:47.735848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.735857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.736191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.736202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.736541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.736550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.736855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.736865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.737256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.737266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.737571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.737581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.737891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.737901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.737958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.737967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.738126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.738137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.738453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.738466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 
00:29:55.740 [2024-12-09 11:44:47.738752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.738761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.739080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.739090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.739415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.739424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.740 qpair failed and we were unable to recover it. 00:29:55.740 [2024-12-09 11:44:47.739706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-09 11:44:47.739716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.739954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.739964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.740311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.740321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.740662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.740672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.740834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.740843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.741209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.741219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.741415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.741424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 
00:29:55.741 [2024-12-09 11:44:47.741646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.741656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.741935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.741945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.742269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.742279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.742463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.742473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.742641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.742650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.743050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.743060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.743377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.743387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.743750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.743760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.744081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.744091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.744408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.744418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 
00:29:55.741 [2024-12-09 11:44:47.744719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.744730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.745048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.745058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.745281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.745290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.745544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.745553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.745890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.745900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.746188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.746198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.746526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.746536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.746738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.746748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.747070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.747080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.747414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.747424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 
00:29:55.741 [2024-12-09 11:44:47.747712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.747722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.748115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.748125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.748443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.748452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.748746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.748755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.749085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.749095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.749413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.749423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.749711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.749720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.750030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.750048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.750112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.750123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 00:29:55.741 [2024-12-09 11:44:47.750501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.741 [2024-12-09 11:44:47.750513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.741 qpair failed and we were unable to recover it. 
00:29:55.741 [2024-12-09 11:44:47.750830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.750839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.751019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.751030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.751211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.751221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.751404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.751413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.751764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.751774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.752112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.752122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.752433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.752442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.752738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.752748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.753075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.753085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 00:29:55.742 [2024-12-09 11:44:47.753249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.742 [2024-12-09 11:44:47.753259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.742 qpair failed and we were unable to recover it. 
00:29:55.742 [2024-12-09 11:44:47.753536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.742 [2024-12-09 11:44:47.753545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.742 qpair failed and we were unable to recover it.
00:29:55.742 [... the three lines above repeat ~200 more times between 11:44:47.753 and 11:44:47.817, identical except for the microsecond timestamp, always for the same tqpair=0x7f0024000b90, addr=10.0.0.2, port=4420 ...]
00:29:55.750 [2024-12-09 11:44:47.817392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.817401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.817675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.817685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.818033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.818043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.818277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.818287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.818570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.818580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.818893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.818903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.819143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.819155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.819487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.819496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.819885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.819894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.820203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.820213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 
00:29:55.750 [2024-12-09 11:44:47.820584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.820593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.820892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.820902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.821229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.821239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.821525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.821540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.821820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.821829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.822161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.822172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.822469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.822478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.822642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.822653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.822861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.822871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.823064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.823075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 
00:29:55.750 [2024-12-09 11:44:47.823280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.823291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.823575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.823584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.823746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.823757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.750 [2024-12-09 11:44:47.824078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.750 [2024-12-09 11:44:47.824088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.750 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.824406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.824416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.824794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.824803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.825106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.825116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.825453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.825463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.825625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.825636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.825937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.825948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 
00:29:55.751 [2024-12-09 11:44:47.826262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.826272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.826562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.826572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.826900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.826910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.827181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.827193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.827488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.827498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.827785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.827795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.828169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.828179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.828468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.828479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.828817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.828827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.829190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.829201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 
00:29:55.751 [2024-12-09 11:44:47.829511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.829521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.829830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.829840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.830158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.830168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.830529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.830539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.830847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.830857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.831168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.831178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.831561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.831572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.831881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.831890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.832092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.751 [2024-12-09 11:44:47.832102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.751 qpair failed and we were unable to recover it. 00:29:55.751 [2024-12-09 11:44:47.832421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.832431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 
00:29:55.752 [2024-12-09 11:44:47.832803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.832814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.833127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.833137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.833290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.833300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.833521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.833532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.833820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.833830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.834157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.834168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.834493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.834502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.834849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.834859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.835144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.835155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.835460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.835478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 
00:29:55.752 [2024-12-09 11:44:47.835791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.835800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.836109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.836120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.836433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.836443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.836745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.836755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.836954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.836964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.837249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.837259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.837546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.837556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.837857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.837866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.838182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.838192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.838512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.838522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 
00:29:55.752 [2024-12-09 11:44:47.838845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.838855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.839085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.839096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.839386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.839395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.839695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.839705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.839882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.752 [2024-12-09 11:44:47.839891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.752 qpair failed and we were unable to recover it. 00:29:55.752 [2024-12-09 11:44:47.840222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.840232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.840547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.840557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.840803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.840813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.841086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.841096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.841399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.841416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 
00:29:55.753 [2024-12-09 11:44:47.841730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.841740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.842048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.842058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.842388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.842398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.842688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.842699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.842869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.842880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.843177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.843187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.843496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.843508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.843857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.843866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.844184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.844194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.844551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.844561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 
00:29:55.753 [2024-12-09 11:44:47.844903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.844914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.845227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.845237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.845542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.845552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.845721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.845732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.846057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.846068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.846285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.846295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.846512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.846522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.846647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.846656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.846922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.846932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 00:29:55.753 [2024-12-09 11:44:47.847234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.753 [2024-12-09 11:44:47.847244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.753 qpair failed and we were unable to recover it. 
00:29:55.753 [2024-12-09 11:44:47.847534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.847544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.847858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.847868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.848182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.848194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.848491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.848501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.848791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.848801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.849108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.849118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.849434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.849443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.849826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.849836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.850137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.850147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.850454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.850464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 
00:29:55.754 [2024-12-09 11:44:47.850776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.850786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.850974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.850984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.851333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.851343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.851537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.851548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.851742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.851752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.852059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.852069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.852371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.852381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.852665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.852674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.852958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.852967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.853269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.853279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 
00:29:55.754 [2024-12-09 11:44:47.853664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.853675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.854007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.854031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.854396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.854406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.854746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.854756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.855057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.754 [2024-12-09 11:44:47.855067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.754 qpair failed and we were unable to recover it. 00:29:55.754 [2024-12-09 11:44:47.855374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-12-09 11:44:47.855384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-12-09 11:44:47.855719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-12-09 11:44:47.855731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-12-09 11:44:47.856078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-12-09 11:44:47.856088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-12-09 11:44:47.856402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-12-09 11:44:47.856412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-12-09 11:44:47.856742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-12-09 11:44:47.856753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-12-09 11:44:47.857069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-12-09 11:44:47.857079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:56.039 [2024-12-09 11:44:47.857280 .. 11:44:47.920211] (last three messages repeated for every subsequent reconnect attempt against the same tqpair=0x7f0024000b90, addr=10.0.0.2, port=4420, each time failing with errno = 111 and ending in "qpair failed and we were unable to recover it.")
00:29:56.039 [2024-12-09 11:44:47.920513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.920523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.920838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.920847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.921164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.921174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.921465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.921476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.921821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.921831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.922125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.922135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.922330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.922340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.922679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.922688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.923022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.923033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.923341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.923351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 
00:29:56.039 [2024-12-09 11:44:47.923650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.923659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.923937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.923946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.924320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.924330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.924563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.924574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.924889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.924899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.925235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.925246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.925548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.925557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.925686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.925697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.926053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.926063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-12-09 11:44:47.926371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.926380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 
00:29:56.039 [2024-12-09 11:44:47.926578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-12-09 11:44:47.926589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.926909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.926918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.927229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.927241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.927582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.927591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.927904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.927913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.928098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.928107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.928279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.928291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.928576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.928586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.928900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.928909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.929229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.929246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 
00:29:56.040 [2024-12-09 11:44:47.929558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.929567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.929858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.929867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.930013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.930024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.930332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.930342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.930639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.930649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.930766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.930775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.931001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.931014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.931313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.931322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.931632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.931652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.931870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.931881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 
00:29:56.040 [2024-12-09 11:44:47.932213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.932223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.932524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.932533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.932876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.932886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.933230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.933239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.933538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.933548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.933952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.933963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.934181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.934191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.934494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.934504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.934802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.934812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.934972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.934983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 
00:29:56.040 [2024-12-09 11:44:47.935309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.935320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.935500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.935511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.935816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.935827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.936029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.936043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.936374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.936385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.936574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.936586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.936931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.936941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.937255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.937266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.937454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.937465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.040 qpair failed and we were unable to recover it. 00:29:56.040 [2024-12-09 11:44:47.937766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.040 [2024-12-09 11:44:47.937777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 
00:29:56.041 [2024-12-09 11:44:47.938077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.938088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.938409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.938420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.938736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.938747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.938933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.938945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.939131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.939142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.939470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.939481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.939660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.939671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.939967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.939978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.940316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.940327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.940672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.940683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 
00:29:56.041 [2024-12-09 11:44:47.940869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.940881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.941248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.941257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.941555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.941566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.941688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.941698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.941993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.942002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.942323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.942333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.942675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.942685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.942994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.943004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.943321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.943331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.943667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.943678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 
00:29:56.041 [2024-12-09 11:44:47.944036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.944046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.944251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.944261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.944563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.944573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.944898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.944908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.945222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.945233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.945553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.945563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.945745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.945754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.946067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.946077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.946282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.946292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.946733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.946744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 
00:29:56.041 [2024-12-09 11:44:47.947008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.947023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.947351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.947361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.947671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.947680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.948004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.948021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.948350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.948359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.948532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.948542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.948847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.948857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.949077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.041 [2024-12-09 11:44:47.949087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.041 qpair failed and we were unable to recover it. 00:29:56.041 [2024-12-09 11:44:47.949425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.949435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.949739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.949749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 
00:29:56.042 [2024-12-09 11:44:47.950051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.950061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.950409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.950420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.950716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.950725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.951061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.951072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.951312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.951322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.951606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.951615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.951800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.951811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.951926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.951937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.952247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.952257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.952459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.952468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 
00:29:56.042 [2024-12-09 11:44:47.952886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.952896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.953194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.953205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.953522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.953532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.953704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.953714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.953884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.953894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.954267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.954277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.954595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.954604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.954928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.954937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.955103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.955114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.955284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.955294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 
00:29:56.042 [2024-12-09 11:44:47.955502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.955513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.955818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.955827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.956151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.956161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.956503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.956512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.956671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.956681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.956980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.956991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.957317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.957328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.957658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.957668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.957965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.957975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-12-09 11:44:47.958268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-12-09 11:44:47.958278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 
00:29:56.042 [2024-12-09 11:44:47.958597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.042 [2024-12-09 11:44:47.958607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.042 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 11:44:47.958 through 11:44:48.022 ...]
00:29:56.048 [2024-12-09 11:44:48.022377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-12-09 11:44:48.022388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-12-09 11:44:48.022707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.022716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.023020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.023030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.023312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.023322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.023545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.023556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.023758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.023768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.024096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.024106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.024426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.024435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.024751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.024762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.025109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.025119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.025424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.025434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 
00:29:56.048 [2024-12-09 11:44:48.025747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.025756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.025919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.025929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.026272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.026283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.026578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.026587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.026884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.026894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.027186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.027196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.027574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.048 [2024-12-09 11:44:48.027584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.048 qpair failed and we were unable to recover it. 00:29:56.048 [2024-12-09 11:44:48.027885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.027895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.028228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.028238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.028552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.028562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-12-09 11:44:48.028899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.028909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.029184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.029194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.029502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.029512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.029818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.029828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.030112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.030122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.030414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.030423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.030731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.030741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.030929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.030939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.031146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.031157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.031339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.031350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-12-09 11:44:48.031550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.031559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.031868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.031878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.032132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.032143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.032481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.032494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.032820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.032830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.033108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.033119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.033412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.033421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.033719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.033728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.034045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.034056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.034288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.034298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-12-09 11:44:48.034613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.034623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.035002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.035015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.035318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.035329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.035559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.035569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.035938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.035947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.036285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.036295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.036609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.036618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.036907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.036922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.037247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.037257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.037638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.037648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-12-09 11:44:48.037963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.037972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.038359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.038369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.038709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.038719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.039059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-12-09 11:44:48.039069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-12-09 11:44:48.039359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.039368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.039695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.039704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.040080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.040091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.040404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.040414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.040723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.040733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.041041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.041051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-12-09 11:44:48.041456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.041465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.041766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.041776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.041966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.041975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.042285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.042295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.042607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.042617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.042760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.042770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.043024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.043034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.043326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.043336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.043635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.043645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.043939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.043948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-12-09 11:44:48.044281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.044291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.044598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.044608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.044829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.044839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.045151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.045163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.045458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.045468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.045638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.045649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.045865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.045875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.046143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.046154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.046449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.046459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.046813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.046823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-12-09 11:44:48.047109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.047119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.047437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.047446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.047763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.047772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.048172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.048183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.048520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.048529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.048713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.048724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.049077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.049087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.049394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.049404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.049692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.049702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.050000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.050014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-12-09 11:44:48.050286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-12-09 11:44:48.050295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-12-09 11:44:48.050629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.050639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.050876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.050886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.051184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.051195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.051505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.051515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.051819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.051829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.052122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.052132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.052518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.052527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.052859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.052868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.053185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.053197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-12-09 11:44:48.053573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.053583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.054055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.054066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.054311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.054321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.054598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.054608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.054938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.054948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.055172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.055182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.055497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.055507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.055698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.055709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.056032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.056042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-12-09 11:44:48.056362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.056372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.056680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.056690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.056976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.056986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.057291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.057304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.057590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.057600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.057650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.057660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.057983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.057993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.058386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.058396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.058698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.058708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.059008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.059027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-12-09 11:44:48.059270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.059280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.059570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.059580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.059919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.059929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.060250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.060260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.060557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.060566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.060892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.060902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.061221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.061231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.061590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.061600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.061800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.061809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-12-09 11:44:48.062150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-12-09 11:44:48.062161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-12-09 11:44:48.062495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.052 [2024-12-09 11:44:48.062504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.052 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats for every subsequent reconnect attempt from [2024-12-09 11:44:48.062819] through [2024-12-09 11:44:48.127242], with only the timestamps changing: each connect() to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED, i.e. nothing is listening on the target), and each qpair fails without recovering.]
00:29:56.057 [2024-12-09 11:44:48.127495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.127506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.127798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.127809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.128149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.128159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.128450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.128459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.128773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.128783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.128987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.128997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.129327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.129337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.129660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.129671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.129851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.129862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.130164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.130174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 
00:29:56.057 [2024-12-09 11:44:48.130490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.130500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-12-09 11:44:48.130801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-12-09 11:44:48.130811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.131163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.131174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.131468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.131479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.131827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.131837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.132046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.132056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.132394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.132404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.132589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.132599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.132924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.132934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.133093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.133103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-12-09 11:44:48.133453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.133463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.133797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.133808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.134143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.134153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.134470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.134480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.134796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.134806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.135116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.135126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.135505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.135516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.135739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.135750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.136041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.136052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.136366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.136377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-12-09 11:44:48.136560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.136571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.136876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.136886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.137172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.137184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.137522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.137532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.137828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.137838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.138138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.138149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.138504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.138515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.138834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.138844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.139153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.139163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.139521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.139532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-12-09 11:44:48.139747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.139759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.140067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.140077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.140457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.140468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.140782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.140793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.141190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.141200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.141505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.141515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.141844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.141854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.142193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.142203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.142337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.142348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-12-09 11:44:48.142649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-12-09 11:44:48.142659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-12-09 11:44:48.142956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.142967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.143289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.143301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.143618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.143628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.143979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.143989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.144370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.144381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.144696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.144705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.144994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.145004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.145328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.145338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.145640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.145651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.145936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.145946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-12-09 11:44:48.146232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.146251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.146549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.146559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.146750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.146760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.147092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.147103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.147414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.147424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.147719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.147729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.147999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.148008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.148302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.148312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.148623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.148634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.148919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.148929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-12-09 11:44:48.149228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.149238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.149552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.149563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.149873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.149884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.150228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.150238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.150617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.150627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.150940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.150950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.151267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.151277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.151517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.151527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.151820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.151830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.152132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.152142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-12-09 11:44:48.152485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.152496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.152834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.152846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.153182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.153192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.153496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.153506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.153831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.153841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.154133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.154143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.154456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.154465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.154767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-12-09 11:44:48.154777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-12-09 11:44:48.155095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.155105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.155306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.155316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-12-09 11:44:48.155506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.155516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.155793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.155804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.156129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.156140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.156434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.156444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.156806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.156817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.157133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.157143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.157433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.157444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.157772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.157781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.158078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.158088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.158402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.158412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-12-09 11:44:48.158716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.158727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.159019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.159030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.159341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.159351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.159661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.159671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.159990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.160001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.160343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.160353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.160736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.160746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.161083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.161094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.161448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.161458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.161759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.161777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-12-09 11:44:48.162114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.162124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.162445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.162463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.162794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.162804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.163182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.163193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.163496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.163505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.163890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.163901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.164215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.164225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.164582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.164593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.164890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.164901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.165251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.165262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-12-09 11:44:48.165566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.165579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.165921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.165933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.166289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.166301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.166617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.166627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.166982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.166993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.167360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.167372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.167710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-12-09 11:44:48.167721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-12-09 11:44:48.167899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.167910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.168249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.168260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.168438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.168450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 
00:29:56.061 [2024-12-09 11:44:48.168674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.168685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.169003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.169020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.169349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.169360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.169632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.169642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.169964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.169974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.170362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.170372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.170711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.170721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.171057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.171067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.171386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.171397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.171724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.171734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 
00:29:56.061 [2024-12-09 11:44:48.172039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.172050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.172378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.172388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.172685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.172695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.172977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.172987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.173161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.173173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.173488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.173498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.173802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.173813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.174130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.174141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.174447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.174462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-12-09 11:44:48.174793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-12-09 11:44:48.174804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 
00:29:56.061 [2024-12-09 11:44:48.175156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-12-09 11:44:48.175167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-12-09 11:44:48.175408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-12-09 11:44:48.175418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-12-09 11:44:48.175727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-12-09 11:44:48.175738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-12-09 11:44:48.175930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-12-09 11:44:48.175941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Read completed with error (sct=0, sc=8)
00:29:56.061 starting I/O failed
00:29:56.061 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 [2024-12-09 11:44:48.176183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Read completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 Write completed with error (sct=0, sc=8)
00:29:56.062 starting I/O failed
00:29:56.062 [2024-12-09 11:44:48.176923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.062 [2024-12-09 11:44:48.177442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-12-09 11:44:48.177550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-12-09 11:44:48.177783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-12-09 11:44:48.177795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-12-09 11:44:48.178121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-12-09 11:44:48.178131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-12-09 11:44:48.178468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-12-09 11:44:48.178478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-12-09 11:44:48.178657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-12-09 11:44:48.178667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-12-09 11:44:48.179028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-12-09 11:44:48.179039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.179392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.179404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.179732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.179744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.180040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.180050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.180220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.180230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.180612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.180623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.180963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.180974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.181294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.181306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.181588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.181599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.181905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.181916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.182168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.182179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.182479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.182490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.182813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.182824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.183149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.183161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.183507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.183521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.183864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.183877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.184073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.184085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.184355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.184367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.184687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.184698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.185022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.185034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.185292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.185303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.185614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.185625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.185976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.185988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.186317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.186328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.186623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.186634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.338 [2024-12-09 11:44:48.186985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.338 [2024-12-09 11:44:48.186997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.338 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.187334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.187346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.187688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.187699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.187894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.187906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.188239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.188250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.188585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.188596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.188914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.188925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.189108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.189120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.189474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.189485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.189826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.189837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.190259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.190271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.190453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.190465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.190664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.190676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.190871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.190882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.191249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.191261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.191569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.191580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.191803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.191815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.192165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.192176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.192518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.192530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.192755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.192767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.192972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.192983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.193321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.193333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.193539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.193551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.193872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.193883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.194198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.194209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.194552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.339 [2024-12-09 11:44:48.194563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.339 qpair failed and we were unable to recover it.
00:29:56.339 [2024-12-09 11:44:48.194767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.194778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.194956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.194967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.195375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.195385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.195688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.195701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.195989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.195999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.196296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.196307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.196499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.196510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.196747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.196757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.196987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.196997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.197386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.197396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.197689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.197700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.197988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.197998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.198305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.198315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.198638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.198648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.198941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.198959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.199310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.199321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.199665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.199675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.199995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.200006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.200228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.200238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.200556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.200567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.200882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.200893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.201230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.201242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.201588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.201598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.201784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.201794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.202136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.202146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.202470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.202480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.340 qpair failed and we were unable to recover it.
00:29:56.340 [2024-12-09 11:44:48.202791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.340 [2024-12-09 11:44:48.202802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.203172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.203183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.203520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.203531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.203725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.203735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.204057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.204067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.204382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.204392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.204696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.204706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.204902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.204913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.205271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.205282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.205603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.205614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.205924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.205936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.206293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.206303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.206609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.206618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.206815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.206825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.207077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.207088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.207413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.207423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.207717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.207727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.208025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.208037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.208358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.208368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.208554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.208564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.208910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.208919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.209143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.209154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.209362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.209373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.209719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.209729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.209954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.209965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.210291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.210301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.210625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.341 [2024-12-09 11:44:48.210635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.341 qpair failed and we were unable to recover it.
00:29:56.341 [2024-12-09 11:44:48.210847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.210857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.211175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.211186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.211382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.211392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.211711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.211721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.212051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.212061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.212387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.212397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.212692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.212702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.213003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.213017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.213322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.213333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.213661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.213671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.214043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.214053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.214347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.214357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.214668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.214678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.214851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.214862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.215078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.215088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.215285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.215296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.215479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.215489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.215804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.215814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.216157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.216168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.216478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.216489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.216817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.216828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.217203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.217213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.217601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.217610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.217933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.217943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.218150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.218160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.218446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.218456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.218669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.218679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.219001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.219017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.219223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.219233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.219544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.219553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.219885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.219900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.220245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.220256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.220467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.220477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.220801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.220811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.221138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.342 [2024-12-09 11:44:48.221148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.342 qpair failed and we were unable to recover it.
00:29:56.342 [2024-12-09 11:44:48.221512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.343 [2024-12-09 11:44:48.221522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.343 qpair failed and we were unable to recover it.
00:29:56.343 [2024-12-09 11:44:48.221870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.343 [2024-12-09 11:44:48.221881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.343 qpair failed and we were unable to recover it.
00:29:56.343 [2024-12-09 11:44:48.222065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.343 [2024-12-09 11:44:48.222075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.343 qpair failed and we were unable to recover it.
00:29:56.343 [2024-12-09 11:44:48.222265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.343 [2024-12-09 11:44:48.222274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.343 qpair failed and we were unable to recover it.
00:29:56.343 [2024-12-09 11:44:48.222566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.343 [2024-12-09 11:44:48.222577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.343 qpair failed and we were unable to recover it.
00:29:56.343 [2024-12-09 11:44:48.222910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.222920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.223360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.223370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.223670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.223681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.223847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.223857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.224066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.224077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.224380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.224390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.224723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.224733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.225043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.225053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.225352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.225361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.225485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.225494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 
00:29:56.343 [2024-12-09 11:44:48.225752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.225762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.226107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.226117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.226428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.226439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.226776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.226787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.227107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.227117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.227440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.227451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.227664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.227674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.227969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.227980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.228289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.228299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.228619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.228629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 
00:29:56.343 [2024-12-09 11:44:48.228839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.228850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.229071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.229081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.229410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.229421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.229646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.229655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.229828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.229838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.343 [2024-12-09 11:44:48.230135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.343 [2024-12-09 11:44:48.230145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.343 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.230328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.230339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.230400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.230410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.230611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.230623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.230972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.230982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 
00:29:56.344 [2024-12-09 11:44:48.231042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.231056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.231358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.231368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.231685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.231695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.232020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.232031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.232354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.232363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.232560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.232570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.232960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.232970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.233253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.233264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.233587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.233597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.233770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.233779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 
00:29:56.344 [2024-12-09 11:44:48.234158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.234168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.234497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.234507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.234856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.234865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.235271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.235281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.235687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.235698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.235871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.235881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.236282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-12-09 11:44:48.236292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-12-09 11:44:48.236687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.236697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.236867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.236878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.237253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.237263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-12-09 11:44:48.237579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.237589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.237833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.237844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.238189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.238200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.238497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.238507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.238730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.238741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.239064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.239074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.239402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.239413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.239644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.239655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.239850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.239860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.240175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.240186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-12-09 11:44:48.240405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.240415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.240753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.240763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.241093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.241105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.241295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.241304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.241591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.241602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.241695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.241704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.241859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.241869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.242207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.242217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.242508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.242518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.242868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.242878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-12-09 11:44:48.243176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.243189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.243390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.243400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.243687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-12-09 11:44:48.243698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-12-09 11:44:48.244000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.244017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.244303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.244315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.244663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.244673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.244979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.244996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.245336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.245347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.245542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.245554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.245886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.245897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 
00:29:56.346 [2024-12-09 11:44:48.246236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.246247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.246638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.246647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.247021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.247031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.247363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.247373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.247580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.247590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.247900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.247910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.248240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.248250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.248561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.248571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.248944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.248954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.249299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.249309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 
00:29:56.346 [2024-12-09 11:44:48.249628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.249637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.249948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.249959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.250145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.250156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.250446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.250457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.250773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.250784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.251095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.251106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.251409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.251419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-12-09 11:44:48.251719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-12-09 11:44:48.251728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.252057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.252067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.252416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.252425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 
00:29:56.347 [2024-12-09 11:44:48.252614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.252624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.252935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.252945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.253256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.253266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.253586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.253596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.253926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.253935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.254163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.254173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.254505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.254515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.254853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.254863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.255199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.255209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.255380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.255391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 
00:29:56.347 [2024-12-09 11:44:48.255771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.255783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.256064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.256074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.256322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.256333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.256656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.256666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.256837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.256848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.257251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.257261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.257553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.257564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.257907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.257917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.258231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.258241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.258553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.258562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 
00:29:56.347 [2024-12-09 11:44:48.258984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.258995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.259236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.259246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.259573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.259583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.347 [2024-12-09 11:44:48.259943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.347 [2024-12-09 11:44:48.259954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.347 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.260255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.260265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.260472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.260482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.260796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.260807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.261127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.261137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.261441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.261451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.261741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.261751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 
00:29:56.348 [2024-12-09 11:44:48.261979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.261990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.262293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.262304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.262583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.262593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.262920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.262930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.263242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.263253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.263571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.263581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.263966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.263976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.264193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.264203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.264591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.264600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.264916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.264928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 
00:29:56.348 [2024-12-09 11:44:48.265313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.265324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.265618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.265629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.265973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.265983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.266272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.266283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.266579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.266589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.266877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.266887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.267218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.267229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.267551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.267560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.267855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.267865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 00:29:56.348 [2024-12-09 11:44:48.268262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.348 [2024-12-09 11:44:48.268272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.348 qpair failed and we were unable to recover it. 
00:29:56.348 [2024-12-09 11:44:48.268722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.349 [2024-12-09 11:44:48.268735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.349 qpair failed and we were unable to recover it.
00:29:56.349 [... the same three-record failure repeats roughly 200 more times between 11:44:48.269 and 11:44:48.335, all for tqpair=0x7f0024000b90 connecting to 10.0.0.2, port 4420; only the timestamps differ ...]
00:29:56.356 [2024-12-09 11:44:48.335125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.356 [2024-12-09 11:44:48.335135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.356 qpair failed and we were unable to recover it.
00:29:56.356 [2024-12-09 11:44:48.335321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.356 [2024-12-09 11:44:48.335331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.356 qpair failed and we were unable to recover it. 00:29:56.356 [2024-12-09 11:44:48.335666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.356 [2024-12-09 11:44:48.335676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.356 qpair failed and we were unable to recover it. 00:29:56.356 [2024-12-09 11:44:48.335971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.356 [2024-12-09 11:44:48.335981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.356 qpair failed and we were unable to recover it. 00:29:56.356 [2024-12-09 11:44:48.336172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.336183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.336523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.336533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.336830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.336841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.337122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.337132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.337434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.337444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.337756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.337767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.338073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.338083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 
00:29:56.357 [2024-12-09 11:44:48.338421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.338431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.338780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.338790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.339120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.339131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.339465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.339474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.339817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.339828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.340155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.340165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.340493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.340503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.340850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.340862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.341219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.341229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.341547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.341557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 
00:29:56.357 [2024-12-09 11:44:48.341898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.341907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.342204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.342214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.342556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.342566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.342876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.342886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.343078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.343089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.343314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.343324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.343534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.343545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.343690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.343700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.357 [2024-12-09 11:44:48.344025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.357 [2024-12-09 11:44:48.344035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.357 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.344332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.344343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 
00:29:56.358 [2024-12-09 11:44:48.344642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.344652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.344970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.344980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.345291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.345301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.345627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.345637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.345987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.345996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.346290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.346302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.346487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.346497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.346814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.346824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.347173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.347184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.347509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.347519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 
00:29:56.358 [2024-12-09 11:44:48.347860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.347871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.348181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.348193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.348513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.348524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.348854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.348865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.349140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.349150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.349468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.349478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.349772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.349784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.350108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.350118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.350409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.350420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.350736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.350746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 
00:29:56.358 [2024-12-09 11:44:48.351043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.351055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.351358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.351368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.351563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.351574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.351919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.351929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.352243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.352253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.358 qpair failed and we were unable to recover it. 00:29:56.358 [2024-12-09 11:44:48.352565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.358 [2024-12-09 11:44:48.352574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.352886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.352896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.353193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.353207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.353543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.353553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.353873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.353883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 
00:29:56.359 [2024-12-09 11:44:48.354074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.354086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.354375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.354385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.354701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.354711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.355019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.355030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.355357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.355367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.355658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.355668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.355886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.355896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.356222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.356232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.356556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.356567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.356915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.356926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 
00:29:56.359 [2024-12-09 11:44:48.357236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.357246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.357570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.357581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.357950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.357961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.358274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.358286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.358629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.358640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.358954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.358966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.359294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.359305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.359502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.359513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.359858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.359869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.360184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.360194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 
00:29:56.359 [2024-12-09 11:44:48.360521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.360532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.360840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.360850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.361235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.359 [2024-12-09 11:44:48.361245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.359 qpair failed and we were unable to recover it. 00:29:56.359 [2024-12-09 11:44:48.361624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.361634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.361808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.361818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.362119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.362130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.362531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.362542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.362854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.362864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.363183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.363193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.363514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.363524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 
00:29:56.360 [2024-12-09 11:44:48.363845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.363855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.364165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.364176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.364498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.364508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.364846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.364855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.365244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.365254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.365556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.365566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.365885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.365895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.366222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.366235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.366450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.366460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.366640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.366649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 
00:29:56.360 [2024-12-09 11:44:48.366820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.366830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.367117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.367127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.367330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.367341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.367536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.367546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.367861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.367872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.368176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.368187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.368576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.368587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.368899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.360 [2024-12-09 11:44:48.368909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.360 qpair failed and we were unable to recover it. 00:29:56.360 [2024-12-09 11:44:48.369100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.369111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.369469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.369479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 
00:29:56.361 [2024-12-09 11:44:48.369778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.369788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.370127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.370137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.370447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.370458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.370800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.370809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.371195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.371205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.371541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.371551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.371759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.371769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.372002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.372026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.372334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.372343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.372658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.372668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 
00:29:56.361 [2024-12-09 11:44:48.372978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.372988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.373313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.373323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.373550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.373560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.373890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.373900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.374312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.374322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.374521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.374531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.374851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.374862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.375170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.375181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.375497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.375507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.375808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.375819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 
00:29:56.361 [2024-12-09 11:44:48.376106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.376117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.376432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.376442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.376755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.376765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.361 qpair failed and we were unable to recover it. 00:29:56.361 [2024-12-09 11:44:48.377080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.361 [2024-12-09 11:44:48.377090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.377415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.377425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.377735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.377745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.378042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.378053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.378365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.378378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.378744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.378753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.378955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.378966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 
00:29:56.362 [2024-12-09 11:44:48.379322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.379332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.379697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.379709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.380049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.380059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.380370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.380381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.380688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.380699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.381087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.381098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.381410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.381421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.381743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.381754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.382070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.382081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.382368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.382379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 
00:29:56.362 [2024-12-09 11:44:48.382672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.382682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.383003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.383018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.383304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.383315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.383660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.383669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.383963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.383975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.384331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.384341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.384708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.384718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.385019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.385030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.385333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.385343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 00:29:56.362 [2024-12-09 11:44:48.385655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.385667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.362 qpair failed and we were unable to recover it. 
00:29:56.362 [2024-12-09 11:44:48.385988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.362 [2024-12-09 11:44:48.385999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.386209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.386220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.386516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.386527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.386854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.386865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.387189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.387201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.387501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.387510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.387869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.387879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.388202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.388212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.388523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.388533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.388867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.388878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 
00:29:56.363 [2024-12-09 11:44:48.389193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.389204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.389518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.389528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.389920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.389932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.390254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.390265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.390563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.390572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.390892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.390903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.391205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.391216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.391540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.391554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.391878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.391889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.392201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.392212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 
00:29:56.363 [2024-12-09 11:44:48.392432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.392443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.363 [2024-12-09 11:44:48.392663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-12-09 11:44:48.392673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.363 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.393020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.393032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.393343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.393354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.393674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.393684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.393983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.393995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.394144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.394155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.394467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.394480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.394705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.394716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.394970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.394980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 
00:29:56.364 [2024-12-09 11:44:48.395304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.395315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.395608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.395619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.395813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.395822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.396143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.396153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.396452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.396462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.396655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.396665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.396970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.396981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.397296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.397306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.397610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.397620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.397939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.397949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 
00:29:56.364 [2024-12-09 11:44:48.398249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.398260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.398601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.398613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.398912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.398923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.399255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.399267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.399596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.399607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.399932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.399944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.400310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.400321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.400518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.400529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.400823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-12-09 11:44:48.400835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.364 qpair failed and we were unable to recover it. 00:29:56.364 [2024-12-09 11:44:48.401172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.401183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 
00:29:56.365 [2024-12-09 11:44:48.401403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.401413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.401738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.401747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.402082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.402093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.402438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.402448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.402748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.402759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.403068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.403079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.403280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.403289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.403621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.403632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.403935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.403944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.404115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.404127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 
00:29:56.365 [2024-12-09 11:44:48.404461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.404471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.404665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.404675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.405072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.405082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.405402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.405412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.405604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.405615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.405822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.405834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.406156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.406166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.406450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.406460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.406756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.406767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.406978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.406988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 
00:29:56.365 [2024-12-09 11:44:48.407279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.407291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.407583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.407594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.407774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.407785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.408092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.408102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.408415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.408425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.408733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.408744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.365 qpair failed and we were unable to recover it. 00:29:56.365 [2024-12-09 11:44:48.409089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-12-09 11:44:48.409100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.409452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.409465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.409684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.409695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.409867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.409878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 
00:29:56.366 [2024-12-09 11:44:48.410183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.410193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.410515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.410526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.410823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.410835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.411155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.411166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.411477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.411496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.411837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.411847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.412143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.412154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.412473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.412483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.412782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.412793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.413097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.413108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 
00:29:56.366 [2024-12-09 11:44:48.413427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.413437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.413634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.413645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.414079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.414090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.414321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.414330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.414526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.414535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.414828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.414840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.415156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.415167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.415489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.415501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.415880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.415890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.416194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.416206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 
00:29:56.366 [2024-12-09 11:44:48.416528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.416538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.416765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.416776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.417016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.417026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.417358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.366 [2024-12-09 11:44:48.417369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.366 qpair failed and we were unable to recover it. 00:29:56.366 [2024-12-09 11:44:48.417691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.417702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.418016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.418027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.418434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.418445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.418742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.418752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.419062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.419072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.419387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.419398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 
00:29:56.367 [2024-12-09 11:44:48.419756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.419768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.420094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.420106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.420458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.420468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.420781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.420791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.421086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.421096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.421287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.421298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.421574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.421585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.421935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.421945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.422285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.422304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.422626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.422635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 
00:29:56.367 [2024-12-09 11:44:48.422956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.422966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.423279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.423289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.423591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.423600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.423814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.423824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.424159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.424170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.424514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.424525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.424875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.424886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.425220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.425231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.425524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.425540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.425848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.425858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 
00:29:56.367 [2024-12-09 11:44:48.426206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.426216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.426389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.367 [2024-12-09 11:44:48.426399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.367 qpair failed and we were unable to recover it. 00:29:56.367 [2024-12-09 11:44:48.426733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.426743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.427055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.427065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.427449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.427459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.427649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.427659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.427970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.427980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.428163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.428176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.428348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.428358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.428560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.428570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 
00:29:56.368 [2024-12-09 11:44:48.428897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.428907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.429236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.429247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.429453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.429464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.429775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.429785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.430098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.430108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.430273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.430284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.430559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.430569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.430899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.430909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.431314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.431324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.431623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.431642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 
00:29:56.368 [2024-12-09 11:44:48.431868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.431878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.432239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.432249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.432564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.432574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.432881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.432892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.433202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.433212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.433516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.433526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.433858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.433868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.434046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.434058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.434388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.368 [2024-12-09 11:44:48.434399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.368 qpair failed and we were unable to recover it. 00:29:56.368 [2024-12-09 11:44:48.434561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.369 [2024-12-09 11:44:48.434572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.369 qpair failed and we were unable to recover it. 
00:29:56.369 [2024-12-09 11:44:48.434897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.434907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.435248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.435259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.435588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.435605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.435931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.435941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.436277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.436288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.436591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.436601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.436833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.436843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.437177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.437187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.437517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.437528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.437832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.437842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.438095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.438106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.438454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.438465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.438862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.438874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.439181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.439192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.439540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.439551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.439898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.439909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.440234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.440245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.440438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.440451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.440683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.440693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.441001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.441018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.441362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.441372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.369 qpair failed and we were unable to recover it.
00:29:56.369 [2024-12-09 11:44:48.441692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.369 [2024-12-09 11:44:48.441702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.442066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.442077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.442409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.442419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.442627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.442636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.442969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.442979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.443287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.443298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.443610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.443620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.443717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.443728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.443960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.443970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.444287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.444298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.444629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.444639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.445020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.445031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.445321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.445331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.445652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.445663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.445986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.445997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.446292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.446303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.446554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.446565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.446881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.446892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.447278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.447290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.447607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.447619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.447971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.447981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.448279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.448290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.448650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.448661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.448982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.448993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.449310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.449320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.449630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.449640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.449869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.449880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.370 [2024-12-09 11:44:48.450218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.370 [2024-12-09 11:44:48.450229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.370 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.450620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.450630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.450959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.450968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.451190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.451201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.451540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.451550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.451875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.451885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.452189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.452200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.452412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.452424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.452752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.452763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.453121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.453134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.453457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.453468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.453694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.453704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.454126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.454136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.454439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.454450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.454755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.454765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.455088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.455099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.455273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.455284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.455646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.455656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.456001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.456021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.456316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.456326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.456529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.456539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.456865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.456875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.457086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.457096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.457488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.457499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.457720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.457730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.458052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.458063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.458438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.458449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.458782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.458792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.459139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.459151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.459376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.459387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.459707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.459719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.459969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.459981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.460236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.460248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.460549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.460561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.460963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.371 [2024-12-09 11:44:48.460973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.371 qpair failed and we were unable to recover it.
00:29:56.371 [2024-12-09 11:44:48.461182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.461193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.461528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.461539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.461843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.461853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.462157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.462167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.462498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.462509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.462854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.462864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.463086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.463100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.463418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.463428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.463748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.463759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.463951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.463964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.464171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.464182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.464398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.464410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.464506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.464515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.464807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.464819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.465037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.465050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.465364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.465374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.465691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.465701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.466021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.466032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.466260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.466270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.466594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.466604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.466925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.466935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.467249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.467259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.467497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.467507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.467742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.467751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.468087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.468098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.468397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.468407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.468694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.468704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.468992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.469002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.469188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.469201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.469503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.469513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.469844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.469854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.470171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.470182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.470404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.470415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.470735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.470746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.471080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.471090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.471331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.471342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.471643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.471653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.471948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.471958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.372 qpair failed and we were unable to recover it.
00:29:56.372 [2024-12-09 11:44:48.472301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.372 [2024-12-09 11:44:48.472311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.472619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.472629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.472951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.472962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.473251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.473264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.473545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.473556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.473873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.473884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.474062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.474073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.474412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.474423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.474722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.474732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.475059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.475070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.475291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.475302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.475618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.475628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.475962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.475972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.476301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.476312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.476602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.476613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.476983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.476992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.477356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.477367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.477722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.477734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.478069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.478089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.478398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.478409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.478748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.478759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.478999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.479009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.479362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.479373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.479583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.479593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.479843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.479855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.480203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.480214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.480522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.480532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.480844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.480854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.481173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.481184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.481497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.481507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.481834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.481844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.482150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.482161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.482469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.482479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.482795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.482805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.483134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.483145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.483370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.483381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.373 [2024-12-09 11:44:48.483607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.373 [2024-12-09 11:44:48.483617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.373 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.483988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.484001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.484391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.484405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.484636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.484647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.484874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.484885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.485194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.485205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.485380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.485391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.485680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.485695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.485936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.485947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.486315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.486326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.486623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.486634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.486964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.486975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.487299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.659 [2024-12-09 11:44:48.487311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:56.659 qpair failed and we were unable to recover it.
00:29:56.659 [2024-12-09 11:44:48.487520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.487530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.487844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.487856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.488058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.488068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.488374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.488384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.488761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.488772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.489081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.489093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.489441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.489452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.489774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.489784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.490101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.490112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.490480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.490490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 
00:29:56.659 [2024-12-09 11:44:48.490788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.490808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.490970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.659 [2024-12-09 11:44:48.490980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.659 qpair failed and we were unable to recover it. 00:29:56.659 [2024-12-09 11:44:48.491273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.491283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.491613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.491624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.491959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.491969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.492154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.492165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.492574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.492584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.492875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.492885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.493233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.493243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.493637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.493648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 
00:29:56.660 [2024-12-09 11:44:48.493880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.493890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.494121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.494133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.494329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.494340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.494701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.494711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.495076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.495086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.495275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.495285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.495623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.495633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.495984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.495996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.496349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.496360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.496683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.496694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 
00:29:56.660 [2024-12-09 11:44:48.496993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.497003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.497194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.497205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.497399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.497410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.497758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.497769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.498098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.498112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.498418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.498428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.498629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.498641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.498968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.498978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.499362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.499372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.660 qpair failed and we were unable to recover it. 00:29:56.660 [2024-12-09 11:44:48.499664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.660 [2024-12-09 11:44:48.499675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 
00:29:56.661 [2024-12-09 11:44:48.500032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.500044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.500365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.500376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.500693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.500703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.501050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.501060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.501265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.501276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.501647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.501658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.501968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.501978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.502299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.502311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.502661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.502674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.502906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.502918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 
00:29:56.661 [2024-12-09 11:44:48.503253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.503264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.503628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.503638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.503976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.503986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.504306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.504317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.504622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.504633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.504982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.504992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.505324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.505335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.505647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.505658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.506023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.506034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.506362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.506372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 
00:29:56.661 [2024-12-09 11:44:48.506764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.506774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.507100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.507111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.507343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.507354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.507694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.507704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.507934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.507944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.508270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.508280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.508699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.508710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.509004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.509021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.509372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.661 [2024-12-09 11:44:48.509382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.661 qpair failed and we were unable to recover it. 00:29:56.661 [2024-12-09 11:44:48.509718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.509736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 
00:29:56.662 [2024-12-09 11:44:48.510076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.510087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.510413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.510423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.510770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.510780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.511115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.511127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.511313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.511326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.511543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.511554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.511889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.511899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.512226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.512237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.512566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.512576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.512886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.512896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 
00:29:56.662 [2024-12-09 11:44:48.513215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.513228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.513569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.513580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.513929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.513940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.514154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.514164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.514481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.514492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.514852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.514863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.515183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.515194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.515499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.515511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.515853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.515863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.516168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.516179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 
00:29:56.662 [2024-12-09 11:44:48.516496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.516506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.516821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.516831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.517143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.517153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.662 [2024-12-09 11:44:48.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.662 [2024-12-09 11:44:48.517463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.662 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.517812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.517822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.518133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.518144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.518454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.518465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.518774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.518785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.519105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.519115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.519436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.519447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 
00:29:56.663 [2024-12-09 11:44:48.519770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.519781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.519976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.519987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.520172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.520184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.520424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.520435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.520662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.520673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.521022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.521032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.521368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.521380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.521582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.521591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.521908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.521918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.522240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.522251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 
00:29:56.663 [2024-12-09 11:44:48.522650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.522663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.522968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.522980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.523329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.523340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.523660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.523671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.523985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.523998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.524371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.524383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.524694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.524706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.525092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.525104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.525437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.525447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.525526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.525537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 
00:29:56.663 [2024-12-09 11:44:48.525813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.525823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.526217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.663 [2024-12-09 11:44:48.526229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.663 qpair failed and we were unable to recover it. 00:29:56.663 [2024-12-09 11:44:48.526555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.526566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.526869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.526880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.527209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.527220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.527521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.527532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.527869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.527878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.528185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.528195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.528389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.528401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.528723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.528733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 
00:29:56.664 [2024-12-09 11:44:48.529057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.529067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.529259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.529268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.529575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.529585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.529751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.529762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.529976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.529986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.530285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.530295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.530612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.530622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.530924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.530934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.531258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.531269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.531584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.531594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 
00:29:56.664 [2024-12-09 11:44:48.531933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.531943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.532177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.532189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.532530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.532540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.532840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.532850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.533163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.533175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.533487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.533498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.533568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.533579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.533800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.533811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.534129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.534140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.534455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.534465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 
00:29:56.664 [2024-12-09 11:44:48.534671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.534681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.535017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.535027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.535325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.535336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.535653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.535663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.535969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.535982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.536310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.536323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.664 [2024-12-09 11:44:48.536644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.664 [2024-12-09 11:44:48.536655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.664 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.536984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.536995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.537308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.537318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.537644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.537654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 
00:29:56.665 [2024-12-09 11:44:48.537891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.537902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.538231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.538243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.538551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.538561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.538876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.538886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.539242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.539253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.539591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.539602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.539907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.539919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.540238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.540248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.540580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.540591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.540958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.540970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 
00:29:56.665 [2024-12-09 11:44:48.541272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.541284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.541611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.541621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.541935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.541946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.542136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.542148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.542251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.542262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.542634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.542684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.543023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.543035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.543424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.543466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.543586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.543596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.543934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.543988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 
00:29:56.665 [2024-12-09 11:44:48.544480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.544532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.544781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.544796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.545284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.545336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.545620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.545634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.545976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.545986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.546330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.546341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.546674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.546685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.547022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.547033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.547371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.547383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.547610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.547621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 
00:29:56.665 [2024-12-09 11:44:48.547974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.547984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.548210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.548223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.548581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.548591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.548822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.548833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.549045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.549056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.549300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.549311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.549518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.549529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.549801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.549812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.550156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.550167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.550458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.550468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 
00:29:56.665 [2024-12-09 11:44:48.550721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.550732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.550974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.550984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.551290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.551301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.551586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.551597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.551913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.665 [2024-12-09 11:44:48.551926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.665 qpair failed and we were unable to recover it. 00:29:56.665 [2024-12-09 11:44:48.552244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.552256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.552574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.552584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.552951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.552962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.553308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.553319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.553527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.553539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 
00:29:56.666 [2024-12-09 11:44:48.553901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.553914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.554234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.554246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.554472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.554484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.554817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.554828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.555063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.555074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.555392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.555405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.555754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.555764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.555968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.555979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.556335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.556346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.556671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.556683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 
00:29:56.666 [2024-12-09 11:44:48.556999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.557009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.557349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.557361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.557707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.557718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.557934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.557945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.558108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.558119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.558462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.558472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.558863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.558874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.559065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.559076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.559330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.559340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.559557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.559568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 
00:29:56.666 [2024-12-09 11:44:48.559887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.559897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.560272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.560284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.560593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.560603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.560853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.560865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.561219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.561231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.561621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.561634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.561836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.561847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.562032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.562043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.562491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.562501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.562804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.562817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 
00:29:56.666 [2024-12-09 11:44:48.563138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.563148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.563446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.563456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.563637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.563646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.563933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.563943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.564148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.564159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.564450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.564460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.564657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.564667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.565018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.565029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.565231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.565242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 00:29:56.666 [2024-12-09 11:44:48.565575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.666 [2024-12-09 11:44:48.565587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.666 qpair failed and we were unable to recover it. 
00:29:56.666 [2024-12-09 11:44:48.565921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.565931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.566120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.566132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.566495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.566505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.566705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.566716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.567058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.567069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.567401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.567412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.567617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.567630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.567910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.567920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.568251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.568263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.568648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.568658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 
00:29:56.667 [2024-12-09 11:44:48.568995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.569005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.569328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.569339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.569531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.569544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.569955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.569968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.570162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.570175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.570472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.570484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.570817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.570827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.571151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.571162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.571474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.571484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.571553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.571562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 
00:29:56.667 [2024-12-09 11:44:48.571853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.667 [2024-12-09 11:44:48.571865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.667 qpair failed and we were unable to recover it. 00:29:56.667 [2024-12-09 11:44:48.572200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.572211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.572521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.572532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.572746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.572756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.572983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.572993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.573333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.573344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.573556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.573566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.573837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.573848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.574176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.574188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.574386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.574396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 
00:29:56.668 [2024-12-09 11:44:48.574518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.574528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.574861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.574872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.575220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.575232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.575621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.575631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.575944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.575956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.576274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.576284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.576601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.576613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.576915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.576926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.577239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.577256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.577596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.577608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 
00:29:56.668 [2024-12-09 11:44:48.577928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.577940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.578241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.578253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.578448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.578459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.578779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.578791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.579085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.579096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.579428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.579439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.579825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.579836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.580157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.580168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.580514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.580524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.580749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.580761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 
00:29:56.668 [2024-12-09 11:44:48.581116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.581127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.581319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.581330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.581577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.581588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.581819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.581830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.582155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.582167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.582391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.582414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.582608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.582618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.582952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.582962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.583296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.583307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.583639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.583651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 
00:29:56.668 [2024-12-09 11:44:48.583976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.583986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.584289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.584300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.584624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.584634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.584983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.584993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.585311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.585324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.585651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.585662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.585852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.585864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.586189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.586202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.586527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.586537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 00:29:56.668 [2024-12-09 11:44:48.586849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.586860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it. 
00:29:56.668 [2024-12-09 11:44:48.587219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.668 [2024-12-09 11:44:48.587230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.668 qpair failed and we were unable to recover it.
00:29:56.668-00:29:56.671 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triple repeats roughly 200 more times between 11:44:48.587 and 11:44:48.655, every occurrence with errno = 111 against tqpair=0x18a0490, addr=10.0.0.2, port=4420 ...]
00:29:56.671 [2024-12-09 11:44:48.655480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.655491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.655782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.655792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.656088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.656099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.656405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.656422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.656722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.656732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.657030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.657041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.657381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.657391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.657702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.657713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.658023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.658033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.658363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.658374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 
00:29:56.671 [2024-12-09 11:44:48.658716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.658726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.659109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.659121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.659466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.659476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.659790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.659801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.660177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.660188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.660489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.660500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.660829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.660839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.671 qpair failed and we were unable to recover it. 00:29:56.671 [2024-12-09 11:44:48.661155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.671 [2024-12-09 11:44:48.661168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.661517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.661527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.661825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.661837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 
00:29:56.672 [2024-12-09 11:44:48.662180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.662191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.662459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.662469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.662761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.662771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.663085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.663095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.663390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.663400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.663727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.663745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.664072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.664082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.664433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.664446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.664783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.664794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.665135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.665147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 
00:29:56.672 [2024-12-09 11:44:48.665490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.665500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.665844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.665854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.666148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.666159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.666532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.666542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.666886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.666897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.667209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.667220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.667539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.667549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.667859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.667868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.668155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.668165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.668481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.668491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 
00:29:56.672 [2024-12-09 11:44:48.668868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.668879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.669077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.669088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.669417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.669427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.669728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.669739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.670104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.670117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.670410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.670428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.670634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.670644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.670839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.670851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.671178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.671190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.671506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.671516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 
00:29:56.672 [2024-12-09 11:44:48.671858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.671872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.672214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.672225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.672561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.672572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.672898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.672908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.673303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.673314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.673612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.673622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.673885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.673896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.674223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.674234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.674578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.674589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.674931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.674941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 
00:29:56.672 [2024-12-09 11:44:48.675259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.675269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.675588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.675599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.675954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.675964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.676322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.676333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.676538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.676548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.676822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.676832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.677119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.677130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.677440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.677451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.677781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.677792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.678083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.678094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 
00:29:56.672 [2024-12-09 11:44:48.678414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.678423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.678707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.678718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.679028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.679038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.679342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.679353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.679694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.679705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.680053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.680064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.680378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.680389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.680691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.680701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.680871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.680883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.681170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.681181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 
00:29:56.672 [2024-12-09 11:44:48.681468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.681478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.681804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.681814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.682096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.682107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.682444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.682456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.682797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.682808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.683113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.683125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.683332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.683341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.683721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.683731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.672 qpair failed and we were unable to recover it. 00:29:56.672 [2024-12-09 11:44:48.684040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.672 [2024-12-09 11:44:48.684050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.684352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.684362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 
00:29:56.673 [2024-12-09 11:44:48.684664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.684675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.684971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.684982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.685194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.685205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.685490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.685501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.685833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.685844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.686186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.686197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.686492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.686503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.686784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.686795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.687099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.687110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.687427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.687437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 
00:29:56.673 [2024-12-09 11:44:48.687759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.687769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.688140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.688151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.688327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.688339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.688653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.688665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.688973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.688983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.689171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.689183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.689504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.689513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.689813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.689823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.690119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.690129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.690442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.690452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 
00:29:56.673 [2024-12-09 11:44:48.690784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.690794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.691179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.691190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.691512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.691523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.691850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.691870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.692095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.692106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.692417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.692427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.692763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.692773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.693107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.693117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.693436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.693447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.693818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.693830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 
00:29:56.673 [2024-12-09 11:44:48.694052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.694063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.694345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.694355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.694679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.694689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.695093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.695105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.695362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.695373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.695680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.695691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.696026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.696039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.696405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.696416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.696738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.696748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 00:29:56.673 [2024-12-09 11:44:48.697043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.673 [2024-12-09 11:44:48.697054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.673 qpair failed and we were unable to recover it. 
00:29:56.673 [2024-12-09 11:44:48.697329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-12-09 11:44:48.697338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
[... the same three-record triple — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats ~208 more times between 11:44:48.697 and 11:44:48.762 ...]
00:29:56.676 [2024-12-09 11:44:48.761971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-12-09 11:44:48.761981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-12-09 11:44:48.762273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.762283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.762571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.762581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.762875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.762886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.763202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.763214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.763504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.763514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.763827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.763836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.764145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.764156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.764454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.764464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.764759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.764770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.765050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.765060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 
00:29:56.676 [2024-12-09 11:44:48.765381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.765391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.765714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.765723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.766029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.766039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.766387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.766396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.766731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.766743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.767088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.767099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.767436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.767451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-12-09 11:44:48.767671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-12-09 11:44:48.767681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.768000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.768015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.768219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.768230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 
00:29:56.677 [2024-12-09 11:44:48.768449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.768460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.768792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.768802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.769110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.769121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.769418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.769428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.769724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.769734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.770057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.770068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.770274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.770284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.770604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.770614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.770945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.770955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.771270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.771280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 
00:29:56.677 [2024-12-09 11:44:48.771454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.771466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.771801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.771811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.772124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.772135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.772472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.772483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.772822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.772833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.773122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.773132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.773450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.773460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.773776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.773785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.774085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.774096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.774410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.774420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 
00:29:56.677 [2024-12-09 11:44:48.774805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.774817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.775048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.775059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.775299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.775310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.775639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.775651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.775979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.775989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.776307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.776317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.776612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.776622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.776857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.776867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.777196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.777207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.777600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.777610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 
00:29:56.677 [2024-12-09 11:44:48.777911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.777921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.778242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.778252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.778609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.778620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.778930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.778939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.779257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.779268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.779450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.779460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.779773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.779783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.780093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.780104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.780429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.780439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.780824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.780835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 
00:29:56.677 [2024-12-09 11:44:48.781175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.781185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.781500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.781511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.781697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.781708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.782019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.782029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.782438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.782448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.782743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.782755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.783130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.783141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.783441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.783451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.783673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.783683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.677 [2024-12-09 11:44:48.784031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.784041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 
00:29:56.677 [2024-12-09 11:44:48.784346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.677 [2024-12-09 11:44:48.784357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.677 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.784646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.784656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.785030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.785041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.785258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.785268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.785570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.785580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.785910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.785920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.786160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.786170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.786447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.786457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.786789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.786798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.787088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.787099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 
00:29:56.678 [2024-12-09 11:44:48.787406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.787416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.787725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.787735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.788071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.788082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.788464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.788474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.788779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.788789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.789098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.789109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.789418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.789427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.789768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.789779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.790120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.790131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.790418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.790427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 
00:29:56.678 [2024-12-09 11:44:48.790749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.790759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.791054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.791064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.791367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.791377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.791681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.791693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.792036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.792047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.792353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.792363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.792655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.792665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.792857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.792866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.793100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.793111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.793437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.793447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 
00:29:56.678 [2024-12-09 11:44:48.793735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.793745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.794061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.794072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.794390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.794400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.794721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.794731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.795030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.795040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.795338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.795348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.795727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.795737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.796042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.796053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.796333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.796343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.796672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.796682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 
00:29:56.678 [2024-12-09 11:44:48.796973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.796984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.797329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.797344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.797634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.797645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.797946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.797957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-12-09 11:44:48.798167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-12-09 11:44:48.798177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.798468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.798479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.798861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.798872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.799185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.799196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.799513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.799523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.799820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.799831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 
00:29:56.955 [2024-12-09 11:44:48.800166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.800176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.800488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.800498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.800791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.800800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.801101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.801112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.801424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.801433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.801739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.801749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.802043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.802053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.802359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.802371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.802658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.802669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 00:29:56.955 [2024-12-09 11:44:48.803021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.803032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it. 
00:29:56.955 [2024-12-09 11:44:48.803338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.955 [2024-12-09 11:44:48.803348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.955 qpair failed and we were unable to recover it.
00:29:56.955-00:29:56.961 [2024-12-09 11:44:48.803639 through 11:44:48.868180] the same three-message sequence repeats roughly 200 more times: every connect() attempt to addr=10.0.0.2, port=4420 for tqpair=0x18a0490 fails with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it."
00:29:56.961 [2024-12-09 11:44:48.868569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.868579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it.
00:29:56.961 [2024-12-09 11:44:48.868870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.868880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.869170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.869181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.869499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.869509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.869846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.869855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.870159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.870169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.870497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.870507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.870699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.870709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.871024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.871035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.871323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.871333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.871625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.871635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 
00:29:56.961 [2024-12-09 11:44:48.871930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.871940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.872239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.872255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.872553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.872563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.872900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.872910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.873180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.873191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.873573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.873587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.873847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.873857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.874096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.874106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.874418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.874428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.874721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.874731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 
00:29:56.961 [2024-12-09 11:44:48.875071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.875082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.875291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.875301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.875569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.875579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.875868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.875878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.876230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.876241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.876550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.876560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.876879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.876889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.877202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.877212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.877547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.877557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.877849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.877859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 
00:29:56.961 [2024-12-09 11:44:48.878183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.878193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.878510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.878521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.878851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.878861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.879171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.879182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.879506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.879517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.879877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.879888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.880156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.880167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.961 [2024-12-09 11:44:48.880451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.961 [2024-12-09 11:44:48.880462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.961 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.880772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.880782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.881161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.881171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 
00:29:56.962 [2024-12-09 11:44:48.881462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.881472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.881771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.881781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.882079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.882098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.882481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.882491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.882779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.882789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.883101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.883111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.883422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.883432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.883757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.883767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.884056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.884067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.884383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.884393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 
00:29:56.962 [2024-12-09 11:44:48.884703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.884713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.885044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.885055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.885371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.885382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.885669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.885679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.885991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.886001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.886332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.886343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.886640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.886651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.886826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.886837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.887202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.887212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.887516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.887526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 
00:29:56.962 [2024-12-09 11:44:48.887834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.887845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.888140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.888151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.888416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.888426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.888755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.888765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.888995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.889005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.889439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.889449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.889788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.889799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.890129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.890140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.890474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.890485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.890817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.890828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 
00:29:56.962 [2024-12-09 11:44:48.891149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.891160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.891465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.891475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.891749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.891759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.892045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.892055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.892374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.962 [2024-12-09 11:44:48.892384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.962 qpair failed and we were unable to recover it. 00:29:56.962 [2024-12-09 11:44:48.892621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.892630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.892925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.892934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.893211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.893222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.893428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.893438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.893748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.893759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 
00:29:56.963 [2024-12-09 11:44:48.894046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.894056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.894376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.894386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.894548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.894559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.894934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.894946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.895177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.895188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.895388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.895398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.895681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.895690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.896065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.896077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.896421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.896431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.896715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.896725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 
00:29:56.963 [2024-12-09 11:44:48.897024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.897034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.897349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.897359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.897651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.897662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.897999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.898015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.898312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.898322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.898670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.898680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.898982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.898992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.899294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.899304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.899658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.899668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.899939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.899950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 
00:29:56.963 [2024-12-09 11:44:48.900281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.900293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.900493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.900503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.900824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.900834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.901171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.901182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.901468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.901478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.901814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.901824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.902127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.902138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.902321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.902330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.902683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.902693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.902983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.902993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 
00:29:56.963 [2024-12-09 11:44:48.903328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.903342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.903396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.903407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.903696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.903707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.904045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.963 [2024-12-09 11:44:48.904057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.963 qpair failed and we were unable to recover it. 00:29:56.963 [2024-12-09 11:44:48.904427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.904438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.904651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.904662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.904865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.904875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.905192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.905202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.905502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.905513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.905819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.905829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 
00:29:56.964 [2024-12-09 11:44:48.906211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.906222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.906526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.906536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.906819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.906829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.907128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.907138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.907462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.907472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.907667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.907677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.907985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.907995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.908331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.908342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.908731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.908742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 00:29:56.964 [2024-12-09 11:44:48.909095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.964 [2024-12-09 11:44:48.909105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.964 qpair failed and we were unable to recover it. 
00:29:56.964 [2024-12-09 11:44:48.909403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.909413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.909765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.909776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.910075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.910086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.910482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.910493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.910779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.910797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.911111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.911122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.911435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.911445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.911724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.911736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.912064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.912075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.912405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.912415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.912704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.912719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.913058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.913068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.913359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.913369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.913660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.913669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.913962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.913972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.914152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.914162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.914456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.914466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.964 [2024-12-09 11:44:48.914654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.964 [2024-12-09 11:44:48.914667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.964 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.914959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.914969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.915342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.915352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.915662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.915672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.915987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.915998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.916335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.916346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.916685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.916696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.917531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.917554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.917890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.917901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.918215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.918225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.918599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.918609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.918901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.918912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.919220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.919231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.919567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.919578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.919866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.919877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.920185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.920196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.920506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.920516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.920722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.920733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.921060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.921070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.921381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.921390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.921705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.921715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.922019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.922030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.922367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.922377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.922665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.922675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.922843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.922855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.923197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.923207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.923588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.923598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.923884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.923894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.924183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.924193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.924511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.924521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.924908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.924919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.925239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.925250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.925564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.925574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.925880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.925890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.926089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.926100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.926424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.926434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.926736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.926746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.927033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.927043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.927356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.927366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.927654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.927665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.965 qpair failed and we were unable to recover it.
00:29:56.965 [2024-12-09 11:44:48.928024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.965 [2024-12-09 11:44:48.928034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.928421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.928431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.928733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.928743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.929032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.929043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.929277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.929288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.929583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.929594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.929955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.929965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.930298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.930308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.930632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.930641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.930817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.930828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.931045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.931055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.931372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.931383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.931705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.931715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.932083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.932094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.932388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.932398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.932757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.932767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.933075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.933085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.933306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.933316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.933577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.933590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.933908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.933918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.934227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.934237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.934540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.934550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.934823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.934834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.935160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.935170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.935453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.935462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.935680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.935691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.936027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.936038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.936432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.936442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.936646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.936656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.937008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.937023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.937319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.937329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.937589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.937598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.937914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.937925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.938119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.938130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.938441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.938451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.938668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.938678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.938999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.939009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.939320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.939330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.939707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.939717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.940026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.966 [2024-12-09 11:44:48.940037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.966 qpair failed and we were unable to recover it.
00:29:56.966 [2024-12-09 11:44:48.940368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.940378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.940664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.940674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.940869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.940879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.941150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.941160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.941489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.941499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.941830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.941843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.942187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.942198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.942434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.942444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.942630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.942642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.942945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.942956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.943175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.943185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.943482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.943492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.943838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.943849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.944155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.944167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.944453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.944463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.944796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.944806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.945020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.945030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.945346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.945356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.945719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.945729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.946003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.946018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.946321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.946331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.946626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.946636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.946931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.946942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.947226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.947238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.947622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.947633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.947935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.947946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.948253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.948264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.948568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.948578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.948869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.948879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.949079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.949091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.949470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.949479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.949770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.949781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.950090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.950103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.950489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.950500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.950749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.950758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.951046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.951057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.951391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.951401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.951678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.951688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.967 [2024-12-09 11:44:48.951992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.967 [2024-12-09 11:44:48.952002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.967 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.952356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.952367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.952675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.952685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.953022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.953034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.953326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.953336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.953718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.953728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.954048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.954058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.954336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.954346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.954530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.954541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.954849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.954860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.955062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.955073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.955380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.955390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.955698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.955707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.955898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.955908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.956028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.956038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.956335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.956345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.956681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.956691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.956978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.956989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.957304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.957315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.957592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.957602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.957804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.957813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.958115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.958126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.958496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.958507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.958725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.958735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.958931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.958941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.959255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.959265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.959602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.959611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.959786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.959797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.960105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.960115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.960414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.960424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.960721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.960732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.961019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.961030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.961136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.968 [2024-12-09 11:44:48.961146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.968 qpair failed and we were unable to recover it.
00:29:56.968 [2024-12-09 11:44:48.961442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.961452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.961775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.961785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.961970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.961982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.962303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.962313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.962647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.962658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.963048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.963059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.963236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.963246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.963522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.963532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.963695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.963707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.964074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.964085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.964390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.964401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.964705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.964716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.965015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.965027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.965333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.965343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.965648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.965658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.965920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.965929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.966129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.966140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.966471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.966481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.966789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.966800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.967113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.967124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.967511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.967520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.967820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.967830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.968155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.968166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.968544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.968555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.968892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.968902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.969091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.969102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.969311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.969322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.969659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.969670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.969956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.969965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.970201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.970214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.970610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.970620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.970905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.970915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.971286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.971297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.971504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.971514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.971721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.971731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.971808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.971817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.972101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.972113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.972427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.972437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.972610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.972621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.969 qpair failed and we were unable to recover it.
00:29:56.969 [2024-12-09 11:44:48.972968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.969 [2024-12-09 11:44:48.972979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 [2024-12-09 11:44:48.973281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-12-09 11:44:48.973292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 [2024-12-09 11:44:48.973593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-12-09 11:44:48.973603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 [2024-12-09 11:44:48.973895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.973906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.974237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.974247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.974549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.974560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.974873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.974884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.975095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.975105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.975304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.975314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.975685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.975696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.976033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.976043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.976367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.976376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.976759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.976769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 
00:29:56.970 [2024-12-09 11:44:48.976949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.976959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.977344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.977354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.977536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.977547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.977866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.977876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.978183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.978198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.978546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.978556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.978852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.978862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.979188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.979199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.979362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.979373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.979744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.979754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 
00:29:56.970 [2024-12-09 11:44:48.980043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.980054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.980378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.980388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.980682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.980692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.980840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.980851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.981176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.981187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.981499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.981509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.981879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.981888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.982236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.982246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.982544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.982554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.982889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.982900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 
00:29:56.970 [2024-12-09 11:44:48.983079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.983089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.983387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.983397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.983759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.983769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.984081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.984091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.984407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.984417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.984723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.984732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.985080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-12-09 11:44:48.985090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-12-09 11:44:48.985399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.985409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.985707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.985717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.986075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.986086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 
00:29:56.971 [2024-12-09 11:44:48.986297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.986308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.986626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.986637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.986977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.986987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.987277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.987287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.987578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.987589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.987893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.987904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.988232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.988242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.988533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.988543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.988838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.988848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.989161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.989172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 
00:29:56.971 [2024-12-09 11:44:48.989482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.989492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.989790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.989800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.990093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.990104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.990461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.990472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.990544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.990553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.990770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.990780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.991110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.991121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.991413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.991424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.991736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.991746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.992069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.992080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 
00:29:56.971 [2024-12-09 11:44:48.992392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.992401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.992770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.992779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.992987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.992997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.993299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.993310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.993508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.993518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.993845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.993856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.994165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.994176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.994484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.994494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.994876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.994885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.995224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.995234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 
00:29:56.971 [2024-12-09 11:44:48.995574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.995584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.995859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.995871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.996058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.996069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.996367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.996377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.996676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.971 [2024-12-09 11:44:48.996686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.971 qpair failed and we were unable to recover it. 00:29:56.971 [2024-12-09 11:44:48.997020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.997031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.997228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.997237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.997561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.997571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.997872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.997883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.998066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.998084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 
00:29:56.972 [2024-12-09 11:44:48.998371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.998381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.998684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.998694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.998908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.998920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.999241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.999251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.999634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.999645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:48.999954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:48.999964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.000245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.000255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.000500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.000510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.000802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.000812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.001111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.001121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 
00:29:56.972 [2024-12-09 11:44:49.001325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.001334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.001647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.001657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.001843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.001854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.002169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.002180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.002503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.002512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.002801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.002811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.003111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.003121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.003505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.003515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.003864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.003874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.004186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.004197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 
00:29:56.972 [2024-12-09 11:44:49.004482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.004493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.004797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.004807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.005141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.005151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.005318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.005329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.005557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.005567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.005894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.005904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.006181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.006192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.006486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.006496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.006832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.006843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.007180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.007192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 
00:29:56.972 [2024-12-09 11:44:49.007294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.007303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.007575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.007585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.007877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.007888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.008213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.008225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.972 qpair failed and we were unable to recover it. 00:29:56.972 [2024-12-09 11:44:49.008411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.972 [2024-12-09 11:44:49.008422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.008623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.008633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.008812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.008822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.009180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.009190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.009494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.009504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.009818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.009828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 
00:29:56.973 [2024-12-09 11:44:49.010117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.010127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.010414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.010424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.010708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.010719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.011037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.011048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.011229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.011240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.011561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.011571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.011952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.011962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.012302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.012313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.012598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.012609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 00:29:56.973 [2024-12-09 11:44:49.012942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.973 [2024-12-09 11:44:49.012954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.973 qpair failed and we were unable to recover it. 
00:29:56.973 [2024-12-09 11:44:49.013263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.973 [2024-12-09 11:44:49.013273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.973 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x18a0490 -> "qpair failed and we were unable to recover it.") repeats 208 more times against 10.0.0.2 port 4420, timestamps 11:44:49.013 through 11:44:49.074 ...]
00:29:56.979 [2024-12-09 11:44:49.074419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.979 [2024-12-09 11:44:49.074429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-12-09 11:44:49.074787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.074797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.075084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.075095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.075395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.075404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.075692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.075702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.076003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.076018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.076386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.076396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.076583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.076594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.076886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.076896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.077228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.077238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.077526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.077536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 
00:29:56.979 [2024-12-09 11:44:49.077867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.077877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.078093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.078104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.078434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.078445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.078729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.078739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.078916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.078926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.079259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.079270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.079568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.079577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.079863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.079873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.080213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.080223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.080517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.080527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 
00:29:56.979 [2024-12-09 11:44:49.080874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.080884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.081263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.081273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.081581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.081591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.081917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.081927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.082218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.082228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.082547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.082556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.082839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.082849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.083198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.083209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.083571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.083581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.083899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.083909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 
00:29:56.979 [2024-12-09 11:44:49.084227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.084238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.084421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.084432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.084738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.084748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.085073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.085084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.085395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.979 [2024-12-09 11:44:49.085405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.979 qpair failed and we were unable to recover it. 00:29:56.979 [2024-12-09 11:44:49.085683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.085694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.085994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.086005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.086309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.086319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.086602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.086612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.086805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.086816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 
00:29:56.980 [2024-12-09 11:44:49.087134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.087144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.087444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.087454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.087773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.087783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.088089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.088101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.088458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.088468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.088729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.088739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.089105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.089116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.089492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.089502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.089794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.089805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.089991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.090001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 
00:29:56.980 [2024-12-09 11:44:49.090204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.090214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.090510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.090520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.090815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.090825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.091104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.091114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.091361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.091373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.091668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.091678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.092034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.092044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.092267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.092278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.092593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.092603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.092821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.092830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 
00:29:56.980 [2024-12-09 11:44:49.093064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.093074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.093230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.093240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.093536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.093546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.093855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.093866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.094164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.094174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.094487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.094498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.094798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.094808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.095120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.095131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.095329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.095339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.095640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.095650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 
00:29:56.980 [2024-12-09 11:44:49.095839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.095850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.096020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.096031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.096419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.096430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.096724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.980 [2024-12-09 11:44:49.096735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.980 qpair failed and we were unable to recover it. 00:29:56.980 [2024-12-09 11:44:49.097040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.097051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.097259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.097268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.097601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.097610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.097900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.097911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.098229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.098240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.098489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.098498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 
00:29:56.981 [2024-12-09 11:44:49.098794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.098803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.099216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.099230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.099551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.099561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.099860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.099869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:56.981 [2024-12-09 11:44:49.100065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.981 [2024-12-09 11:44:49.100075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:56.981 qpair failed and we were unable to recover it. 00:29:57.258 [2024-12-09 11:44:49.100382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.100393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.258 qpair failed and we were unable to recover it. 00:29:57.258 [2024-12-09 11:44:49.100572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.100583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.258 qpair failed and we were unable to recover it. 00:29:57.258 [2024-12-09 11:44:49.100849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.100860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.258 qpair failed and we were unable to recover it. 00:29:57.258 [2024-12-09 11:44:49.101202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.101212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.258 qpair failed and we were unable to recover it. 00:29:57.258 [2024-12-09 11:44:49.101417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.101426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.258 qpair failed and we were unable to recover it. 
00:29:57.258 [2024-12-09 11:44:49.101744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.101753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.258 qpair failed and we were unable to recover it. 00:29:57.258 [2024-12-09 11:44:49.102053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.102069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.258 qpair failed and we were unable to recover it. 00:29:57.258 [2024-12-09 11:44:49.102424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.258 [2024-12-09 11:44:49.102434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.102723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.102741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.103057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.103068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.103385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.103395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.103728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.103738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.103920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.103931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.104253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.104263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.104497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.104507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 
00:29:57.259 [2024-12-09 11:44:49.104813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.104823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.105131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.105142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.105440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.105450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.105653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.105662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.105987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.105997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.106338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.106349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.106643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.106654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.106953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.106964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.107245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.107258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.107594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.107605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 
00:29:57.259 [2024-12-09 11:44:49.107912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.107923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.108227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.108238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.108400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.108410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.108589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.108600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.108928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.108939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.109265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.109277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.109579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.109590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.109867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.109878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.110208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.110218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.110544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.110554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 
00:29:57.259 [2024-12-09 11:44:49.110915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.110925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.111215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.111225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.111512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.111522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.111816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.111826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.112129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.112139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.112423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.112433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.112743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.112753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.113037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.113048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.113384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.113394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 00:29:57.259 [2024-12-09 11:44:49.113727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.259 [2024-12-09 11:44:49.113737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.259 qpair failed and we were unable to recover it. 
00:29:57.259 [2024-12-09 11:44:49.114115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.259 [2024-12-09 11:44:49.114125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:57.259 qpair failed and we were unable to recover it.
00:29:57.260-00:29:57.265 [the same three messages repeat for every reconnect attempt from 11:44:49.114434 through 11:44:49.172036: each connect() to 10.0.0.2 port 4420 fails with errno 111 and the qpair is not recovered]
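errno 111 on Linux is ECONNREFUSED: the TCP SYN reaches 10.0.0.2, but nothing is listening on port 4420, so the kernel answers with RST and connect() fails immediately. A minimal standalone sketch of the failure mode posix_sock_create keeps hitting here (this is illustrative, not SPDK source; address and port are copied from the log):

```c
/* Reproduce the log's failure mode: a blocking connect() to a
 * host/port with no listener fails with ECONNREFUSED (errno 111). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP default port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```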
[... the same connect()/qpair failure pattern continues (11:44:49.172 through 11:44:49.173), interleaved with the test script tearing the target down ...]
00:29:57.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3718238 Killed "${NVMF_APP[@]}" "$@"
00:29:57.265 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:57.265 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:57.265 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:57.265 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:57.265 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... reconnect failures continue in the background (11:44:49.174 through 11:44:49.177); same three-line pattern as above ...]
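The two halves of the trace interleave here: target_disconnect.sh has just SIGKILLed the running nvmf target (the "Killed" notice from line 36 of the script), and disconnect_init/nvmfappstart begin bringing a fresh target up; this deliberate outage is why the initiator's reconnect attempts above keep being refused. A hedged sketch of that kill-and-restart step (NVMF_APP and nvmfpid are names taken from the trace; the bodies are illustrative, not SPDK's actual helpers):

kill -9 "$nvmfpid"             # target dies; host-side connect() now gets ECONNREFUSED
wait "$nvmfpid" 2>/dev/null    # the shell reports the job as "Killed" (exit status 137)
"${NVMF_APP[@]}" -m 0xF0 &     # relaunch the target app with the new core mask
nvmfpid=$!                     # remember the new pid for the listen check below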
[... reconnect failures repeat (11:44:49.177 through 11:44:49.183); same pattern, timestamps only ...]
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3719268
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3719268
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3719268 ']'
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:57.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:57.266 11:44:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved posix_sock_create/nvme_tcp_qpair_connect_sock failures (11:44:49.183 through 11:44:49.187) elided; same three-line pattern as above ...]
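waitforlisten then polls until the relaunched nvmf_tgt (pid 3719268) is alive and its JSON-RPC socket at /var/tmp/spdk.sock is available, giving up after max_retries=100 attempts. A minimal sketch of that loop, using only the variables shown in the trace; the real helper in the autotest_common.sh referenced above differs in detail:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do              # max_retries=100 from the trace
        kill -0 "$pid" 2>/dev/null || return 1   # target process died early
        [[ -S $rpc_addr ]] && return 0           # RPC UNIX socket has appeared
        sleep 0.1
    done
    return 1                                     # timed out waiting for the listener
}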
[... the connect() failed, errno = 111 / sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it pattern repeats continuously from 11:44:49.187 through 11:44:49.226 while the new target initializes; individual repetitions elided ...]
00:29:57.270 [2024-12-09 11:44:49.227175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.227185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.227532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.227541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.227840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.227851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.228059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.228071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.228361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.228371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.228666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.228677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.228899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.228909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.229317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.229327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.229695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.229705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.230031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.230042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 
00:29:57.270 [2024-12-09 11:44:49.230269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.230279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.230599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.230609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.230896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.230905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.231195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.231205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.231395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.231405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.231730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.231739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.231941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.231951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.232317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.232328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.232653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.232665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.232964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.232974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 
00:29:57.270 [2024-12-09 11:44:49.233283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.233294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.233512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.233523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.233919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.233930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.234242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.234252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.234559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.234569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.234756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.234766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.234976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.234987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.235302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.235313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.235598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.235608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 00:29:57.270 [2024-12-09 11:44:49.235950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.270 [2024-12-09 11:44:49.235961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.270 qpair failed and we were unable to recover it. 
00:29:57.270 [2024-12-09 11:44:49.236898] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:29:57.270 [2024-12-09 11:44:49.236951] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
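The EAL parameters line above is the argument vector the nvmf application hands to DPDK's environment abstraction layer at startup. A minimal sketch of how such a vector reaches rte_eal_init() follows; this is a hypothetical standalone example, not the SPDK nvmf target itself, with the argument values copied from the log line (some --log-level flags omitted for brevity; see the full set above). It assumes DPDK headers and libraries are installed.

    /* Hypothetical sketch: passing the logged EAL parameters to rte_eal_init(). */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                           /* program name as shown in the log */
            "-c", "0xF0",                     /* core mask: run on cores 4-7 */
            "--no-telemetry",
            "--log-level=lib.eal:6",          /* remaining --log-level flags elided */
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",            /* namespaces hugepage files per app */
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() parses the EAL arguments and returns the number of
         * arguments consumed, or a negative value on failure. */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        return 0;
    }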
00:29:57.274 [the connect()/qpair error group resumes at 11:44:49.237026 and repeats through 11:44:49.278171 with only the timestamps changing; duplicates elided]
00:29:57.274 [2024-12-09 11:44:49.278375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.278385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.278592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.278603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.278792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.278803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.279146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.279156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.279447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.279458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.279787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.279797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.280089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.280099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.280305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.280315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.280513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.280522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.280695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.280704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 
00:29:57.274 [2024-12-09 11:44:49.280916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.280925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.281149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.281160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.281484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.281494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.281824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.281834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.282135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.282145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.282440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.282450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.282686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.282696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.283008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.283023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.283350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.274 [2024-12-09 11:44:49.283360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.274 qpair failed and we were unable to recover it. 00:29:57.274 [2024-12-09 11:44:49.283696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.283706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 
00:29:57.275 [2024-12-09 11:44:49.284009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.284027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.284410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.284420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.284594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.284604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.284923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.284933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.285244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.285257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.285572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.285582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.285870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.285880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.286066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.286076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.286387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.286397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.286598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.286608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 
00:29:57.275 [2024-12-09 11:44:49.286921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.286931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.287110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.287120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.287308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.287318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.287581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.287591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.287875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.287885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.288183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.288193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.288466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.288476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.288799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.288808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.288993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.289004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.289241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.289252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 
00:29:57.275 [2024-12-09 11:44:49.289442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.289452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.289756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.289765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.290055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.290065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.290270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.290279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.290618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.290628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.290956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.290966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.291275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.291285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.291578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.291587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.291897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.291907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.292242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.292252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 
00:29:57.275 [2024-12-09 11:44:49.292419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.292429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.292636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.292645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.292936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.292946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.293278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.293288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.293574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.293583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.293921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.293932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.294025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.294034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.294333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.275 [2024-12-09 11:44:49.294342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.275 qpair failed and we were unable to recover it. 00:29:57.275 [2024-12-09 11:44:49.294652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.294662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.294970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.294980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 
00:29:57.276 [2024-12-09 11:44:49.295283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.295294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.295633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.295643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.295949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.295959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.296335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.296346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.296647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.296657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.297000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.297016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.297293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.297303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.297594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.297604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.297942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.297952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.298250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.298261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 
00:29:57.276 [2024-12-09 11:44:49.298538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.298547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.298848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.298858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.299174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.299184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.299469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.299479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.299800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.299809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.299900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.299909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.300196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.300206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.300506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.300516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.300858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.300868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.301173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.301184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 
00:29:57.276 [2024-12-09 11:44:49.301527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.301537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.301847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.301858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.301989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.301999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.302339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.302349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.302636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.302647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.302837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.302846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.303134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.303144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.303367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.303376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.303697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.303707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.303997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.304008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 
00:29:57.276 [2024-12-09 11:44:49.304353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.304364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.304571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.304580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.304746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.276 [2024-12-09 11:44:49.304757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.276 qpair failed and we were unable to recover it. 00:29:57.276 [2024-12-09 11:44:49.305062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.305072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.305384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.305394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.305704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.305713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.306038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.306048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.306342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.306352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.306545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.306555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.306947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.306957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 
00:29:57.277 [2024-12-09 11:44:49.307253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.307264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.307571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.307581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.307885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.307896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.308211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.308222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.308516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.308526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.308812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.308821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.309009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.309023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.309333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.309342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.309731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.309740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.310051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.310061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 
00:29:57.277 [2024-12-09 11:44:49.310392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.310402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.310726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.310737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.311056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.311067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.311234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.311243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.311535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.311552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.311866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.311876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.312170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.312180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.312473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.312483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.312831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.312842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.313151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.313163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 
00:29:57.277 [2024-12-09 11:44:49.313452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.313462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.313656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.313666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.313991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.314001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.314317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.314327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.314629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.314639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.314945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.314954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.315276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.315286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.315620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.315630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.315946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.315956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 00:29:57.277 [2024-12-09 11:44:49.316133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.277 [2024-12-09 11:44:49.316143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.277 qpair failed and we were unable to recover it. 
00:29:57.277 [2024-12-09 11:44:49.316348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.277 [2024-12-09 11:44:49.316358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:57.277 qpair failed and we were unable to recover it.
00:29:57.277 [... the same connect()/qpair-failure triplet repeats for every attempt from 11:44:49.316594 through 11:44:49.334031; only the timestamps differ ...]
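The errno = 111 repeated above is ECONNREFUSED on Linux: nothing is accepting connections on 10.0.0.2:4420 at this point (the address and port come from the log; the inference that the NVMe/TCP listener is simply not up yet is an assumption). A minimal standalone C sketch, not SPDK code, that reproduces the same errno against an unoccupied port:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int
    main(void)
    {
        struct sockaddr_in addr = { 0 };
        int fd;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, Linux sets errno to ECONNREFUSED (111). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }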
00:29:57.279 [... connect()/qpair-failure triplet repeats from 11:44:49.334350 through 11:44:49.335565 ...]
00:29:57.279 [2024-12-09 11:44:49.335736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:57.279 [... connect()/qpair-failure triplet repeats from 11:44:49.335918 through 11:44:49.336792 ...]
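The spdk_app_start NOTICE above is printed while the SPDK application framework brings the nvmf target up on its 4 cores, concurrently with the initiator's failing connect loop. A minimal sketch of an application using that entry point, assuming the public API in spdk/event.h; the name and reactor mask are illustrative values chosen to match this run, not taken from the test scripts:

    #include "spdk/event.h"

    static void
    start_fn(void *ctx)
    {
        (void)ctx;
        /* Real application logic would start here; stop right away in this sketch. */
        spdk_app_stop(0);
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts = {};
        int rc;

        (void)argc;
        (void)argv;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "nvmf";          /* matches the /dev/shm/nvmf_trace.0 name later in the log */
        opts.reactor_mask = "0xF0";  /* assumption: cores 4-7, as started in this run */

        rc = spdk_app_start(&opts, start_fn, NULL); /* emits "Total cores available: N" */
        spdk_app_fini();
        return rc;
    }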
00:29:57.279 [... the connect()/qpair-failure triplet repeats continuously from 11:44:49.337126 through 11:44:49.370059; every attempt targets tqpair=0x18a0490, addr=10.0.0.2, port=4420 and ends "qpair failed and we were unable to recover it." ...]
00:29:57.282 [2024-12-09 11:44:49.371177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:57.282 [2024-12-09 11:44:49.371205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:57.282 [2024-12-09 11:44:49.371213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:57.282 [2024-12-09 11:44:49.371220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:57.282 [2024-12-09 11:44:49.371226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:57.282 [... connect()/qpair-failure triplet repeats from 11:44:49.370270 through 11:44:49.372345, interleaved with the notices above ...]
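Per the app_setup_trace notices above, all tracepoint groups are enabled (mask 0xFFFF) and the trace buffer for this app instance is backed by the shared-memory file /dev/shm/nvmf_trace.0. A runtime snapshot can be captured with the command the log itself suggests:

    spdk_trace -s nvmf -i 0

As the final notice says, the shm file can also simply be copied off the machine and inspected offline after the run.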
00:29:57.282 [... connect() failed (errno = 111) / qpair failed triples continue, interleaved with the reactor start-up notices below ...]
00:29:57.282 [2024-12-09 11:44:49.372800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:57.282 [2024-12-09 11:44:49.372956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:57.282 [2024-12-09 11:44:49.373079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:57.283 [2024-12-09 11:44:49.373284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:57.283 [... connect() failed (errno = 111) / qpair failed triples continue through 11:44:49.375 ...]
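The reactor notices record SPDK's event framework starting one polling thread pinned to each configured core (cores 4-7 here). Roughly speaking, and only as a plain-pthreads illustration of that per-core pinning pattern (this is not SPDK's reactor code), the shape is:

/* Plain-pthreads illustration of per-core polling threads, similar in
 * spirit to the reactor notices above. This is NOT SPDK's reactor code. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *reactor_loop(void *arg)
{
    printf("Reactor started on core %ld\n", (long)arg);
    /* A real reactor would poll its registered pollers/events here. */
    return NULL;
}

int main(void)
{
    long cores[] = {4, 5, 6, 7};  /* cores reported in the log above */
    pthread_t threads[4];

    for (int i = 0; i < 4; i++) {
        cpu_set_t set;
        pthread_attr_t attr;

        CPU_ZERO(&set);
        CPU_SET((int)cores[i], &set);
        pthread_attr_init(&attr);
        /* Pin the thread to its core before it starts (glibc extension). */
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
        pthread_create(&threads[i], &attr, reactor_loop, (void *)cores[i]);
        pthread_attr_destroy(&attr);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

Compile with -pthread; the out-of-order "started on core" prints in the log above reflect exactly this kind of concurrent per-core start-up.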
00:29:57.283 [... the connect() failed (errno = 111) / sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triple repeats continuously from 11:44:49.375 through 11:44:49.424 ...]
00:29:57.567 [2024-12-09 11:44:49.425065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.567 [2024-12-09 11:44:49.425076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.567 qpair failed and we were unable to recover it. 00:29:57.567 [2024-12-09 11:44:49.425241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.567 [2024-12-09 11:44:49.425251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.567 qpair failed and we were unable to recover it. 00:29:57.567 [2024-12-09 11:44:49.425562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.567 [2024-12-09 11:44:49.425574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.567 qpair failed and we were unable to recover it. 00:29:57.567 [2024-12-09 11:44:49.425736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.567 [2024-12-09 11:44:49.425745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.567 qpair failed and we were unable to recover it. 00:29:57.567 [2024-12-09 11:44:49.426067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.567 [2024-12-09 11:44:49.426077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.567 qpair failed and we were unable to recover it. 00:29:57.567 [2024-12-09 11:44:49.426409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.567 [2024-12-09 11:44:49.426419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.567 qpair failed and we were unable to recover it. 00:29:57.567 [2024-12-09 11:44:49.426614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.567 [2024-12-09 11:44:49.426625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.426955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.426965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.427254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.427264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.427551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.427568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 
00:29:57.568 [2024-12-09 11:44:49.427901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.427911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.428229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.428239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.428536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.428546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.428925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.428935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.429242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.429253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.429581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.429590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.429947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.429957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.430293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.430303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.430618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.430628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.430941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.430951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 
00:29:57.568 [2024-12-09 11:44:49.431253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.431263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.431572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.431582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.431977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.431986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.432372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.432383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.432573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.432583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.433066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.433083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.433272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.433283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.433641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.433651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.433883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.433893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.434095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.434109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 
00:29:57.568 [2024-12-09 11:44:49.434288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.434299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.434359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.434370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.434695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.434705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.434906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.434916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.435105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.435116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.435417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.435426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.435745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.435756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.436037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.436047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.436314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.436323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.436510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.436520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 
00:29:57.568 [2024-12-09 11:44:49.436705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.436714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.437128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.437139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.437344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.437354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.437589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.437599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.437875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.437884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.438054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.568 [2024-12-09 11:44:49.438065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.568 qpair failed and we were unable to recover it. 00:29:57.568 [2024-12-09 11:44:49.438251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.438261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.438552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.438562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.438888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.438898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.439080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.439090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 
00:29:57.569 [2024-12-09 11:44:49.439320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.439337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.439510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.439519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.439819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.439829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.440175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.440186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.440402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.440412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.440605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.440616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.440801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.440813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.440995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.441005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.441387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.441398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.441681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.441691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 
00:29:57.569 [2024-12-09 11:44:49.441995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.442004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.442237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.442247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.442635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.442645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.442820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.442829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.443140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.443151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.443453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.443462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.443762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.443772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.444086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.444097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.444346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.444356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.444668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.444677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 
00:29:57.569 [2024-12-09 11:44:49.444973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.444984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.445183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.445193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.445362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.445371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.445553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.445563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.445894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.445904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.446066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.446077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.446298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.446308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.446672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.446682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.446983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.446993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569 [2024-12-09 11:44:49.447188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.447199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 
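Editorial aside (not harness output): errno 111 on Linux is ECONNREFUSED, i.e. the target answered the SYN with a TCP RST because nothing was listening on the NVMe/TCP port at that moment. A minimal standalone sketch, assuming a reachable host with no listener on the port (the 10.0.0.2:4420 target is reused from the log purely for illustration), that produces the same errno the posix_sock_create path reports above:

```c
/* Minimal sketch, not part of the SPDK tree: attempt a plain TCP
 * connect() the way posix_sock_create ultimately does, and print the
 * errno on failure. Against a reachable host with no listener on the
 * port, connect() fails with errno 111 (ECONNREFUSED); an unreachable
 * host would instead time out with ETIMEDOUT. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),	/* NVMe/TCP default port */
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		/* With no listener on 10.0.0.2:4420 this prints:
		 * connect() failed, errno = 111 (Connection refused) */
		printf("connect() failed, errno = %d (%s)\n",
		       errno, strerror(errno));
	}
	close(fd);
	return 0;
}
```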
00:29:57.569 [2024-12-09 11:44:49.447564-11:44:49.448009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 [three final attempts against tqpair=0x18a0490 at 10.0.0.2:4420, each ending "qpair failed and we were unable to recover it."] 00:29:57.569 [2024-12-09 11:44:49.448115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189d030 is same with the state(6) to be set 00:29:57.569 [2024-12-09 11:44:49.448683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.569 [2024-12-09 11:44:49.448773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420 00:29:57.569 qpair failed and we were unable to recover it. 00:29:57.569-00:29:57.570 [2024-12-09 11:44:49.449019-11:44:49.450466] [five further attempts, now against tqpair=0x7f0028000b90 at 10.0.0.2:4420, fail with the same errno = 111 sequence]
00:29:57.570-00:29:57.572 [2024-12-09 11:44:49.450808-11:44:49.471411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 [the sequence repeats for tqpair=0x7f0028000b90 at 10.0.0.2:4420 for the rest of this window, with two interleaved attempts against tqpair=0x7f0024000b90 at 11:44:49.462739 and 11:44:49.463264; every attempt ends "qpair failed and we were unable to recover it."]
00:29:57.572 [2024-12-09 11:44:49.471764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.471770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.472059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.472066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.472254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.472261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.472599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.472606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.472769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.472776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.472868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.472875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.473200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.473208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.473539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.473548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.473713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.473720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.474039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.474046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 
00:29:57.572 [2024-12-09 11:44:49.474464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.474471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.474797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.474804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.475139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.475146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.475463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.475469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.475650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.475657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.475993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.476000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.476310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.476318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.476487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.476495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.476779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.476787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.476972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.476979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 
00:29:57.572 [2024-12-09 11:44:49.477291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.477298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.477628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.477635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.477848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.477855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.478079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.478086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.478399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.478407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.478722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.478729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.478764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.572 [2024-12-09 11:44:49.478771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.572 qpair failed and we were unable to recover it. 00:29:57.572 [2024-12-09 11:44:49.478938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.478945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.479261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.479268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.479556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.479617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 
00:29:57.573 [2024-12-09 11:44:49.479937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.479944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.480151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.480158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.480317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.480324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.480386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.480393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.480745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.480753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.480895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.480903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.481220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.481229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.481444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.481452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.481885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.481893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.482200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.482209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 
00:29:57.573 [2024-12-09 11:44:49.482525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.482533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.482717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.482725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.482887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.482895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.483186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.483193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.483524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.483532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.483852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.483859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.483923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.483930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.484153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.484162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.484497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.484504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.484825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.484832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 
00:29:57.573 [2024-12-09 11:44:49.485153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.485159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.485484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.485491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.485828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.485834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.486045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.486056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.486410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.486418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.486722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.486737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.486892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.486900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.487062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.487070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.487247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.487254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.487580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.487587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 
00:29:57.573 [2024-12-09 11:44:49.487799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.487806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.488176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.488183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.488479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.488485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.488785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.488791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.488984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.488992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.489213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.573 [2024-12-09 11:44:49.489220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.573 qpair failed and we were unable to recover it. 00:29:57.573 [2024-12-09 11:44:49.489520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.489527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.489848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.489855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.490185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.490192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.490510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.490517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 
00:29:57.574 [2024-12-09 11:44:49.490832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.490839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.491001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.491007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.491278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.491286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.491618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.491625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.491819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.491826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.492191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.492198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.492606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.492613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.492813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.492820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.493186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.493194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.493523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.493530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 
00:29:57.574 [2024-12-09 11:44:49.493568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.493575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.493734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.493742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.494086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.494094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.494412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.494419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.494729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.494736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.495080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.495087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.495399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.495406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.495721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.495730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.496047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.496055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.496225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.496232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 
00:29:57.574 [2024-12-09 11:44:49.496525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.496532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.496886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.496893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.497210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.497218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.497511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.497518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.497807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.574 [2024-12-09 11:44:49.497823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.574 qpair failed and we were unable to recover it. 00:29:57.574 [2024-12-09 11:44:49.498024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.498031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.498344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.498351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.498700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.498707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.499046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.499054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.499224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.499230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 
00:29:57.575 [2024-12-09 11:44:49.499533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.499540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.499751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.499758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.500082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.500089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.500380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.500395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.500438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.500445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.500593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.500600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.500775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.500781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.501132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.501140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.501316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.501324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.501637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.501644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 
00:29:57.575 [2024-12-09 11:44:49.501680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.501687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.501965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.501972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.502053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.502060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.502220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.502227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.502286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.502294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.502600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.502608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.502930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.502937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.503243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.503251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.503560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.503567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.503732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.503739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 
00:29:57.575 [2024-12-09 11:44:49.503897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.503903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.504074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.504081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.504412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.504418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.504658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.504665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.505002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.505009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.505179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.505186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.505634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.505641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.505944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.505952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.506273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.506280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 00:29:57.575 [2024-12-09 11:44:49.506596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.575 [2024-12-09 11:44:49.506603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.575 qpair failed and we were unable to recover it. 
00:29:57.575 [2024-12-09 11:44:49.506673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.575 [2024-12-09 11:44:49.506679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.575 qpair failed and we were unable to recover it.
[... the three-line error above repeats continuously, with only the microsecond timestamps advancing, from 11:44:49.506 through 11:44:49.561: every connect() attempt to 10.0.0.2:4420 fails with errno = 111 and the qpair cannot be recovered. Within this window, two consecutive attempts report tqpair=0x7f0030000b90 instead of 0x7f0028000b90: ...]
00:29:57.578 [2024-12-09 11:44:49.532061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.578 [2024-12-09 11:44:49.532140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420
00:29:57.578 qpair failed and we were unable to recover it.
00:29:57.578 [2024-12-09 11:44:49.532557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.578 [2024-12-09 11:44:49.532587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420
00:29:57.578 qpair failed and we were unable to recover it.
[... attempts then resume against tqpair=0x7f0028000b90 and keep failing identically; the final attempt in this window is: ...]
00:29:57.581 [2024-12-09 11:44:49.561226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.581 [2024-12-09 11:44:49.561233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.581 qpair failed and we were unable to recover it.
00:29:57.581 [2024-12-09 11:44:49.561562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.581 [2024-12-09 11:44:49.561569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.581 qpair failed and we were unable to recover it. 00:29:57.581 [2024-12-09 11:44:49.561729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.581 [2024-12-09 11:44:49.561737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.581 qpair failed and we were unable to recover it. 00:29:57.581 [2024-12-09 11:44:49.562028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.581 [2024-12-09 11:44:49.562036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.581 qpair failed and we were unable to recover it. 00:29:57.581 [2024-12-09 11:44:49.562241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.581 [2024-12-09 11:44:49.562248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.581 qpair failed and we were unable to recover it. 00:29:57.581 [2024-12-09 11:44:49.562417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.581 [2024-12-09 11:44:49.562424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.581 qpair failed and we were unable to recover it. 00:29:57.581 [2024-12-09 11:44:49.562709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.562716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.563045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.563052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.563364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.563371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.563684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.563691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.564005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.564015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 
00:29:57.582 [2024-12-09 11:44:49.564309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.564316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.564478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.564485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.564761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.564777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.565109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.565116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.565437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.565445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.565631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.565639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.565850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.565857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.566205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.566212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.566444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.566452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.566630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.566636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 
00:29:57.582 [2024-12-09 11:44:49.566974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.566981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.567295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.567303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.567463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.567471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.567628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.567635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.567869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.567876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.568081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.568088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.568279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.568285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.568629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.568636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.568811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.568817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.569108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.569115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 
00:29:57.582 [2024-12-09 11:44:49.569351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.569358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.569700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.569707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.569892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.569899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.570208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.570215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.570540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.570548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.570867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.570874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.571061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.571068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.571230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.571236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.571395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.571402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.571736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.571744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 
00:29:57.582 [2024-12-09 11:44:49.572097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.572104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.572274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.572281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.572639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.582 [2024-12-09 11:44:49.572646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.582 qpair failed and we were unable to recover it. 00:29:57.582 [2024-12-09 11:44:49.572954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.572960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.573280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.573287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.573579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.573593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.573910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.573917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.574088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.574095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.574313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.574321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.574645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.574652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 
00:29:57.583 [2024-12-09 11:44:49.574830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.574837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.575009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.575019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.575377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.575384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.575568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.575575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.575823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.575833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.576089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.576096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.576404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.576412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.576616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.576623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.576799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.576806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.577051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.577058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 
00:29:57.583 [2024-12-09 11:44:49.577289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.577295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.577621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.577628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.577840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.577848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.578199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.578206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.578381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.578388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.578686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.578693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.578883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.578891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.579258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.579265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.579447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.579454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.579726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.579733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 
00:29:57.583 [2024-12-09 11:44:49.580078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.580085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.580399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.580406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.580449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.580456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.580635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.580642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.580952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.580959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.581260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.581268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.581465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.581472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.581510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.581516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.581553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.581559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.581896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.581903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 
00:29:57.583 [2024-12-09 11:44:49.582210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.582218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.582536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.582543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.583 qpair failed and we were unable to recover it. 00:29:57.583 [2024-12-09 11:44:49.582846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.583 [2024-12-09 11:44:49.582854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.583174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.583182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.583342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.583350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.583588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.583595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.583916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.583923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.584228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.584235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.584564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.584570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.584735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.584742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 
00:29:57.584 [2024-12-09 11:44:49.585146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.585153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.585465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.585472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.585658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.585667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.586051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.586058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.586386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.586395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.586710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.586717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.587038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.587045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.587393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.587400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.587442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.587449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.587611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.587618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 
00:29:57.584 [2024-12-09 11:44:49.587874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.587881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.588208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.588215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.588397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.588404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.588713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.588720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.588917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.588923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.589107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.589115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.589440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.589447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.589738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.589745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.589951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.589957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.590128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.590135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 
00:29:57.584 [2024-12-09 11:44:49.590360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.590366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.590555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.590563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.590755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.590761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.584 [2024-12-09 11:44:49.591056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.584 [2024-12-09 11:44:49.591063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.584 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.591435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.591441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.591609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.591616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.591655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.591662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.591856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.591863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.592197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.592204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.592505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.592513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 
00:29:57.585 [2024-12-09 11:44:49.592834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.592840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.593048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.593056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.593412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.593418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.593601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.593608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.593886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.593893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.594054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.594061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.594344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.594351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.594484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.594491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.594799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.594805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 00:29:57.585 [2024-12-09 11:44:49.594976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.585 [2024-12-09 11:44:49.594983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.585 qpair failed and we were unable to recover it. 
00:29:57.585 [2024-12-09 11:44:49.595314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.585 [2024-12-09 11:44:49.595321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.585 qpair failed and we were unable to recover it.
(The three-line connect()/qpair-failure sequence above repeats roughly 200 more times, from 11:44:49.595477 through 11:44:49.652017, elapsed time 00:29:57.585 to 00:29:57.589, identical except for timestamps: every connection attempt to addr=10.0.0.2, port=4420 fails with errno = 111, i.e. ECONNREFUSED. Three occurrences around 11:44:49.640 report tqpair=0x7f0024000b90 instead of 0x7f0028000b90.)
00:29:57.589 [2024-12-09 11:44:49.652007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.589 [2024-12-09 11:44:49.652017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.589 qpair failed and we were unable to recover it.
00:29:57.589 [2024-12-09 11:44:49.652326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.589 [2024-12-09 11:44:49.652333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.589 [2024-12-09 11:44:49.652648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.589 [2024-12-09 11:44:49.652655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.589 [2024-12-09 11:44:49.652867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.589 [2024-12-09 11:44:49.652873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.589 [2024-12-09 11:44:49.653209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.653216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.653410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.653416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.653693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.653700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.654028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.654035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.654192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.654199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.654531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.654539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.654865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.654872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 
00:29:57.590 [2024-12-09 11:44:49.655037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.655044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.655275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.655281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.655584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.655592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.655906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.655914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.656122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.656129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.656326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.656332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.656673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.656680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.657003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.657013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.657333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.657340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.657630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.657645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 
00:29:57.590 [2024-12-09 11:44:49.657949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.657956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.658049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.658057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.658339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.658346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.658456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.658462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.658812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.658819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.659114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.659121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.659443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.659450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.659581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.659588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.659863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.659871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.660271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.660278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 
00:29:57.590 [2024-12-09 11:44:49.660570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.660577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.660893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.660900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.661114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.661121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.661391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.661399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.661713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.661720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.662042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.662049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.662399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.662405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.662577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.662584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.662860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.662867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.663057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.663064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 
00:29:57.590 [2024-12-09 11:44:49.663344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.663351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.663664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.663671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.663830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.663837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.664227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.664233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.664558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.664565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.664919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.664925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.665239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.665246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.665443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.665450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.665806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.665813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.666021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.666029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 
00:29:57.590 [2024-12-09 11:44:49.666206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.666214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.666595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.666601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.666763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.666770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.667067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.667076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.667486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.667492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.667655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.667662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.668002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.668009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.668328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.668335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.668615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.668621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.668894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.668901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 
00:29:57.590 [2024-12-09 11:44:49.669086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.669093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.669410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.669418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.669795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.669802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.670086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.670093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.670275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.670282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.670435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.670442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.670685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.670692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.671026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.671033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.671214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.671221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.671266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.671272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 
00:29:57.590 [2024-12-09 11:44:49.671619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.671626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.671941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.671948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.672280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.672288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.672607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.672614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.672937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.672945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.590 qpair failed and we were unable to recover it. 00:29:57.590 [2024-12-09 11:44:49.673312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-09 11:44:49.673319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.673632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.673639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.673866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.673873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.674183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.674190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.674371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.674379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 
00:29:57.591 [2024-12-09 11:44:49.674690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.674697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.674853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.674861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.674898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.674905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.675193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.675200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.675527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.675534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.675844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.675851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.676191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.676198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.676503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.676515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.676876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.676882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.677176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.677183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 
00:29:57.591 [2024-12-09 11:44:49.677497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.677503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.677628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.677634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.677779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.677786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.678086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.678093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.678271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.678278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.678599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.678606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.678684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.678690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.678879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.678886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.679210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.679217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.679382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.679389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 
00:29:57.591 [2024-12-09 11:44:49.679721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.679727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.680035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.680044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.680207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.680214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.680399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.680406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.680661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.680668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.680850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.680858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.681044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.681051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.681173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.681179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.681379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.681386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.681659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.681667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 
00:29:57.591 [2024-12-09 11:44:49.681704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.681711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.681989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.681996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.682211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.682217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.682330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.682337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.682598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.682606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.682925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.682932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.683249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.683257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.683598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.683604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.683926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.683933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.684238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.684245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 
00:29:57.591 [2024-12-09 11:44:49.684548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.684555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.684728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.684736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.685023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.685031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.685250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.685257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.685594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.685601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.685827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.685834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.686290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.686296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.686512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.686518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.686797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.686805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.687125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.687132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 
00:29:57.591 [2024-12-09 11:44:49.687471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.687478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.687767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.687774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.688102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.688109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.688295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.688302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.688541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.688548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.688718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.688725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.688878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.688885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.688952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.688959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.689212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.689220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.689385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.689392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 
00:29:57.591 [2024-12-09 11:44:49.689553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.689561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.689949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.689958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.591 [2024-12-09 11:44:49.690353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-09 11:44:49.690361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.591 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.690496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.690504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.690807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.690814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.691133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.691140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.691455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.691461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.691665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.691672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.691983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.691990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.692059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.692066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 
00:29:57.592 [2024-12-09 11:44:49.692241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.692247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.692435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.692442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.692500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.692507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.692793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.692801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.693120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.693127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.693447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.693454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.693775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.693782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.694105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.694114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.694412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.694419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.694739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.694746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 
00:29:57.592 [2024-12-09 11:44:49.694939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.694946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.695131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.695138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.695450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.695456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.695711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.695718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.696050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.696057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.696241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.696248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.696428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.696435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.696632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.696639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.696966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.696973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.697265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.697273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 
00:29:57.592 [2024-12-09 11:44:49.697442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.697448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.697639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.697646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.697801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.697809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.698142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.698149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.698332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.698339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.698577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.698584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.698816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.698823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.699132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.699139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.699457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.699464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.699643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.699650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 
00:29:57.592 [2024-12-09 11:44:49.700053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.700060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.700274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.700283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.700625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.700632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.700956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.700963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.701265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.701272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.701446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.701453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.701638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.701645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.701815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.701824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.702018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.702027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.702362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.702370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 
00:29:57.592 [2024-12-09 11:44:49.702688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.702697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.702999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.703006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.703302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.703316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.703517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.703524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.703915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.703923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.703972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.703979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.704199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.704207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.704391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.704398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.704573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.704580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.704859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.704866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 
00:29:57.592 [2024-12-09 11:44:49.705261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.705268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.705607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.705614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.705911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.705918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.706250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.706257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.706577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.706584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.706902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.706909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.707208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.707216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.592 [2024-12-09 11:44:49.707554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.592 [2024-12-09 11:44:49.707561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.592 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.707684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.707692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.707931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.707938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 
00:29:57.872 [2024-12-09 11:44:49.708250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.708259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.708591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.708599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.708952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.708959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.709248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.709262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.709597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.709605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.709761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.709769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.710075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.710082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.710380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.710389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.710548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.710556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.710952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.710959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 
00:29:57.872 [2024-12-09 11:44:49.711258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.711265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.711429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.711438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.711716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.711723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.711940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.711947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.712247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.712255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.712586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.712593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.712633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.712640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.712958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.712966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.713296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.713303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.713612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.713619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 
00:29:57.872 [2024-12-09 11:44:49.713956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.713964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.714268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.714276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.714635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.714643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.714791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.714799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.714986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.714994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.715282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.715290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.715619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.715627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.715671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.715679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.715966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.715974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.872 qpair failed and we were unable to recover it. 00:29:57.872 [2024-12-09 11:44:49.716136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.872 [2024-12-09 11:44:49.716144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 
00:29:57.873 [2024-12-09 11:44:49.716451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.716459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.716778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.716786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.717103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.717110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.717408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.717422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.717617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.717624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.717796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.717804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.718027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.718034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.718363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.718370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.718596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.718603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.718924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.718931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 
00:29:57.873 [2024-12-09 11:44:49.719242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.719249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.719553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.719560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.719781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.719788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.719967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.719974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.720173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.720181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.720347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.720354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.720655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.720661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.720846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.720853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.721198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.721205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.721386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.721394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 
00:29:57.873 [2024-12-09 11:44:49.721574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.721581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.721806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.721815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.722038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.722045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.722106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.722112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.722516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.722524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.722683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.722690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.722918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.722926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.723095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.723101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.723280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.723287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.723630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.723637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 
00:29:57.873 [2024-12-09 11:44:49.723933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.723941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.724160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.724169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.724507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.724515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.724831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.724838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.725162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.725169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.725376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.725383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.725694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.873 [2024-12-09 11:44:49.725701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.873 qpair failed and we were unable to recover it. 00:29:57.873 [2024-12-09 11:44:49.726021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.726028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.726346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.726357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.726669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.726676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 
00:29:57.874 [2024-12-09 11:44:49.726992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.726999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.727234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.727241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.727428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.727436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.727745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.727751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.727922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.727929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.728141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.728148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.728306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.728314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.728485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.728493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.728804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.728810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.729116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.729123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 
00:29:57.874 [2024-12-09 11:44:49.729317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.729323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.729633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.729640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.730018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.730025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.730365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.730372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.730532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.730539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.730715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.730722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.731020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.731027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.731378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.731384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.731571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.731578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 00:29:57.874 [2024-12-09 11:44:49.732066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.874 [2024-12-09 11:44:49.732074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.874 qpair failed and we were unable to recover it. 
00:29:57.874 [2024-12-09 11:44:49.732226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.874 [2024-12-09 11:44:49.732232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.874 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats continuously from 11:44:49.732 through 11:44:49.750; identical retries elided ...]
00:29:57.876 [2024-12-09 11:44:49.750546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.876 [2024-12-09 11:44:49.750631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0024000b90 with addr=10.0.0.2, port=4420
00:29:57.876 qpair failed and we were unable to recover it.
00:29:57.876 [2024-12-09 11:44:49.750849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.876 [2024-12-09 11:44:49.750881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420
00:29:57.876 qpair failed and we were unable to recover it.
[... the same failure then repeats continuously for tqpair=0x18a0490 with addr=10.0.0.2, port=4420, from 11:44:49.751 through 11:44:49.787; identical retries elided ...]
00:29:57.880 [2024-12-09 11:44:49.787760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.787771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.788149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.788159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.788460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.788471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.788806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.788815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.789093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.789103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.789410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.789420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.789614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.789624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.789809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.789821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.790137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.790147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.790422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.790432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 
00:29:57.880 [2024-12-09 11:44:49.790825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.790836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.791013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.791024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.791328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.791337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.791584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.791594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.791895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.791905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.792227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.792237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.792419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.792429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.792636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.792646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.792924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.792933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.793108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.793118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 
00:29:57.880 [2024-12-09 11:44:49.793286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.793296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.793545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.793554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.793883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.793894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.794068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.794079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.794119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.794129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.794426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.794437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.794630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.794640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.794947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.794958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.795260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.880 [2024-12-09 11:44:49.795271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.880 qpair failed and we were unable to recover it. 00:29:57.880 [2024-12-09 11:44:49.795583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.795593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 
00:29:57.881 [2024-12-09 11:44:49.795904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.795915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.796103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.796114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.796300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.796311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.796606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.796617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.796938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.796951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.797275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.797286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.797411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.797422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.797691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.797702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.798024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.798037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.798192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.798203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 
00:29:57.881 [2024-12-09 11:44:49.798571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.798581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.798878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.798888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.799085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.799095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.799393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.799402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.799590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.799600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.799916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.799926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.800101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.800113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.800415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.800424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.800724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.800735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.801019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.801028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 
00:29:57.881 [2024-12-09 11:44:49.801068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.801077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.801420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.801430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.801744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.801753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.802042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.802052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.802377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.802386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.802686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.802696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.802876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.802886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.803128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.803138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.803315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.803324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.803800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.803890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 
00:29:57.881 [2024-12-09 11:44:49.804128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.804167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.804508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.804560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.804862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.881 [2024-12-09 11:44:49.804873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.881 qpair failed and we were unable to recover it. 00:29:57.881 [2024-12-09 11:44:49.805068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.805079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.805264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.805273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.805664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.805674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.806005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.806017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.806193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.806203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.806533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.806542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.806831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.806840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 
00:29:57.882 [2024-12-09 11:44:49.807150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.807160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.807465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.807475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.807812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.807821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.807988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.807998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.808102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.808112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.808456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.808466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.808647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.808657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.808845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.808855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.809147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.809157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.809345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.809355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 
00:29:57.882 [2024-12-09 11:44:49.809593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.809603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.809823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.809833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.810050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.810060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.810363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.810372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.810688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.810698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.811059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.811069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.811286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.811296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.811491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.811501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.811839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.811849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.812138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.812148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 
00:29:57.882 [2024-12-09 11:44:49.812219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.812228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.812533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.812543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.812854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.812865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.813178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.813189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.813496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.813505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.813706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.813716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.814083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.814093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.814397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.814413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.814720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.814730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.814889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.814899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 
00:29:57.882 [2024-12-09 11:44:49.815126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.882 [2024-12-09 11:44:49.815137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.882 qpair failed and we were unable to recover it. 00:29:57.882 [2024-12-09 11:44:49.815531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.815540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.815931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.815940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.816102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.816111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.816274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.816283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.816445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.816455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.816648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.816657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.816838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.816849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.817088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.817098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.817452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.817462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 
00:29:57.883 [2024-12-09 11:44:49.817808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.817818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.818133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.818143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.818429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.818439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.818746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.818755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.819065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.819075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.819429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.819438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.819634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.819643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.819840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.819850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.820166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.820176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.820340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.820350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 
00:29:57.883 [2024-12-09 11:44:49.820703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.820713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.821016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.821026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.821189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.821200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.821579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.821588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.821739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.821749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.821923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.821933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.822120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.822130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.822301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.822311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0490 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.822711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.822740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 00:29:57.883 [2024-12-09 11:44:49.823236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.883 [2024-12-09 11:44:49.823268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.883 qpair failed and we were unable to recover it. 
00:29:57.883 [2024-12-09 11:44:49.823615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.883 [2024-12-09 11:44:49.823625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.883 qpair failed and we were unable to recover it.
[... this three-line failure repeats back-to-back, with only the timestamps advancing, roughly two hundred more times through 00:29:57.889 [2024-12-09 11:44:49.878720]; every occurrence reports the same tqpair=0x7f0028000b90, addr=10.0.0.2, port=4420; verbatim duplicates elided ...]
00:29:57.889 [2024-12-09 11:44:49.879057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.879064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.879243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.879250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.879658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.879665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.879979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.879987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.880302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.880309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.880475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.880482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.880856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.880864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.881085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.881094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.881406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.881413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.881726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.881733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 
00:29:57.889 [2024-12-09 11:44:49.882049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.882057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.882223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.882230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.882433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.882440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.882759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.882766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.883122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.883130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.883461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.883468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.883680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.883687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.884020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.884027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.884352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.884359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.884758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.884764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 
00:29:57.889 [2024-12-09 11:44:49.885074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.885088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.885262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.885268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.885465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.885472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.885716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.885724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.886031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.886038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.889 [2024-12-09 11:44:49.886330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.889 [2024-12-09 11:44:49.886337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.889 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.886520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.886527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.886679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.886685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.887018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.887026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.887245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.887254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 
00:29:57.890 [2024-12-09 11:44:49.887497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.887504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.887695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.887701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.887919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.887926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.888219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.888226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.888531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.888538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.888692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.888699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.888889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.888895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.889079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.889086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.889370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.889378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.889540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.889549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 
00:29:57.890 [2024-12-09 11:44:49.889716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.889723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.889988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.889995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.890299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.890306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.890635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.890642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.890941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.890949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.891255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.891262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.891669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.891675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.891981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.891988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.892071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.892078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.892296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.892303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 
00:29:57.890 [2024-12-09 11:44:49.892625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.892632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.892798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.892805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.893130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.893137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.893369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.893376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.893570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.893577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.893875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.893882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.894084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.894091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.894159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.894165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.894345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.890 [2024-12-09 11:44:49.894352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.890 qpair failed and we were unable to recover it. 00:29:57.890 [2024-12-09 11:44:49.894680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.894686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 
00:29:57.891 [2024-12-09 11:44:49.894835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.894842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.894922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.894928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.895227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.895235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.895535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.895541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.895857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.895871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.896183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.896190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.896437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.896444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.896700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.896707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.897017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.897024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.897354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.897362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 
00:29:57.891 [2024-12-09 11:44:49.897650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.897657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.897981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.897993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.898358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.898365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.898648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.898655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.898947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.898954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.899251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.899259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.899412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.899420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.899705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.899712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.900064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.900071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.900241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.900247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 
00:29:57.891 [2024-12-09 11:44:49.900412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.900419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.900715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.900722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.900895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.900902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.901177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.901185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.901541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.901547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.901835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.901842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.902000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.902008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.902094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.902102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.902342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.902349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.902653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.902660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 
00:29:57.891 [2024-12-09 11:44:49.902817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.902823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.903202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.903209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.903391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.903399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.903694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.903700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.904037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.904044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.904446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.904453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.904761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.904769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.891 [2024-12-09 11:44:49.905086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.891 [2024-12-09 11:44:49.905094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.891 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.905390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.905398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.905711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.905718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 
00:29:57.892 [2024-12-09 11:44:49.906037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.906044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.906363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.906369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.906688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.906700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.907000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.907007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.907305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.907313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.907507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.907514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.907549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.907555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.907839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.907846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.908019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.908027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.908320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.908329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 
00:29:57.892 [2024-12-09 11:44:49.908368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.908374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.908691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.908698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.908994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.909001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.909319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.909327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.909516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.909523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.909849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.909856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.909895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.909901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.910138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.910146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.910456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.910462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.910759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.910766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 
00:29:57.892 [2024-12-09 11:44:49.910925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.910933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.911212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.911219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.911497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.911504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.911804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.911811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.912140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.912148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.912480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.912487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.912699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.912707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.913005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.913015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.913306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.913313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 00:29:57.892 [2024-12-09 11:44:49.913737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.892 [2024-12-09 11:44:49.913745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.892 qpair failed and we were unable to recover it. 
00:29:57.892 [2024-12-09 11:44:49.914057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.892 [2024-12-09 11:44:49.914065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.892 qpair failed and we were unable to recover it.
00:29:57.892 [last three messages repeated, with advancing timestamps, for every reconnect attempt from 2024-12-09 11:44:49.914109 through 11:44:49.975055; each attempt on tqpair=0x7f0028000b90 (addr=10.0.0.2, port=4420) failed identically with errno = 111]
00:29:57.898 [2024-12-09 11:44:49.975268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.898 [2024-12-09 11:44:49.975277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.898 qpair failed and we were unable to recover it.
00:29:57.898 [2024-12-09 11:44:49.975542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.975549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.975885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.975892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.976102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.976111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.976446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.976454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.976762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.976769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.976969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.976977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.977181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.977188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.977363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.977378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.977422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.977430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.977714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.977721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 
00:29:57.898 [2024-12-09 11:44:49.977905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.977914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.978256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.978264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.978559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.978567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.978735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.978743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.979073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.979080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.979268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.979277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.979556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.979563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.979879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.979886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.980063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.980070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.980285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.980292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 
00:29:57.898 [2024-12-09 11:44:49.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.980571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.981025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-12-09 11:44:49.981032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-12-09 11:44:49.981338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.981345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.981646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.981654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.981835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.981843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.982136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.982143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.982525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.982533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.982872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.982880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.983210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.983218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.983514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.983521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 
00:29:57.899 [2024-12-09 11:44:49.983701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.983708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.983858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.983865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.984199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.984207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.984536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.984544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.984714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.984723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.984941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.984950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.985244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.985254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.985420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.985427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.985493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.985500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.985534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.985541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 
00:29:57.899 [2024-12-09 11:44:49.985666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.985673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.985854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.985861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.986111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.986119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.986432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.986439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.986619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.986627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.986912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.986919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.987067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.987074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.987296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.987303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.987662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.987670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.987995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.988003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 
00:29:57.899 [2024-12-09 11:44:49.988197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.988207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.988564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.988571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.988765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.988774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.988986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.988994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.989283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.989291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.989618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-12-09 11:44:49.989626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-12-09 11:44:49.989949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.989956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.990180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.990188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.990511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.990518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.990826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.990833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-12-09 11:44:49.991171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.991179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.991390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.991405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.991785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.991793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.992119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.992127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.992325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.992332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.992412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.992419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.992461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.992468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.992504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.992511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.992823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.992831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.993224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.993233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-12-09 11:44:49.993273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.993280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.993584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.993591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.993911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.993919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.994252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.994260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.994596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.994604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.994800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.994809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.995079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.995089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.995249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.995257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.995493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.995500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.995874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.995881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-12-09 11:44:49.996038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.996046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.996294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.996302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.996597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.996604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.996922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.996929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.997110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.997118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.997303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.997310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.997597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.997604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.997819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.997827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.998007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.998017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.998209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.998218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-12-09 11:44:49.998385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.998393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.998677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.998684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.999008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.999019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.999184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-12-09 11:44:49.999190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-12-09 11:44:49.999346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:49.999353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:49.999641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:49.999649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:49.999840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:49.999848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.000150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.000158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.000468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.000475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.000652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.000664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-12-09 11:44:50.000830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.000837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.001109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.001116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.001300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.001307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.001604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.001612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.001943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.001950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.002020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.002027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.002268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.002275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.002485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.002494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.002676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.002682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.003068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.003075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-12-09 11:44:50.003252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.003259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.003557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.003564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.003732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.003740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.004137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.004145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.004354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.004361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.004559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.004566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.004757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.004768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.005056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.005064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.005389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.005397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.006002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.006017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-12-09 11:44:50.006257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.006265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.006471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.006478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.006692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.006699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.006941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.006949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.007199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.007207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.007601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.007608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.007936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.007943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.008225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.008232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.008419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.008426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-12-09 11:44:50.008806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-12-09 11:44:50.008813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-12-09 11:44:50.009135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.901 [2024-12-09 11:44:50.009143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:57.901 qpair failed and we were unable to recover it.
00:29:58.174 [2024-12-09 11:44:50.032229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.174 [2024-12-09 11:44:50.032236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:58.174 qpair failed and we were unable to recover it.
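errno 111 on Linux is ECONNREFUSED: each connect() attempt to 10.0.0.2:4420 is refused because no NVMe/TCP listener is accepting on that port while the target side of the disconnect test is down. A minimal sketch of the failure mode, assuming a Linux host; the address and port are taken from the records above, and this is an illustration, not SPDK's posix.c code:

/* Sketch: connect() to a TCP port with no listener fails with
 * errno 111 (ECONNREFUSED), the errno reported by posix.c above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}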
00:29:58.174 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:58.174 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:58.174 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:58.174 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:58.174 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.174 [2024-12-09 11:44:50.033440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.174 [2024-12-09 11:44:50.033459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:58.174 qpair failed and we were unable to recover it.
00:29:58.176 [2024-12-09 11:44:50.050947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.176 [2024-12-09 11:44:50.050954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:58.176 qpair failed and we were unable to recover it.
00:29:58.176 [2024-12-09 11:44:50.051278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.176 [2024-12-09 11:44:50.051286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:58.176 qpair failed and we were unable to recover it.
00:29:58.176 [2024-12-09 11:44:50.053038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.176 [2024-12-09 11:44:50.053131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420
00:29:58.176 qpair failed and we were unable to recover it.
00:29:58.176 [2024-12-09 11:44:50.053435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.176 [2024-12-09 11:44:50.053473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0030000b90 with addr=10.0.0.2, port=4420
00:29:58.176 qpair failed and we were unable to recover it.
00:29:58.177 [2024-12-09 11:44:50.064348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.177 [2024-12-09 11:44:50.064355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:58.177 qpair failed and we were unable to recover it.
00:29:58.177 [2024-12-09 11:44:50.064523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.064530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.064820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.064828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.065123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.065130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.065308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.065314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.065642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.065650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.065856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.065863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.066160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.066168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.066480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.066487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.066802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.066809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 00:29:58.177 [2024-12-09 11:44:50.067134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.177 [2024-12-09 11:44:50.067142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.177 qpair failed and we were unable to recover it. 
00:29:58.178 [2024-12-09 11:44:50.067463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.067471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.067666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.067674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.067898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.067906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.068183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.068190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.068394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.068401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.068578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.068585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.068957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.068964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.069286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.069296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.069470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.069477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.069644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.069652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 
00:29:58.178 [2024-12-09 11:44:50.069826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.069835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.070058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.070065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.070377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.070384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.070551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.070559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.070778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.070786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.070982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.070990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.071404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.071411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.071713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.071722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.072159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.072167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 00:29:58.178 [2024-12-09 11:44:50.072330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.178 [2024-12-09 11:44:50.072338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420 00:29:58.178 qpair failed and we were unable to recover it. 
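errno = 111 in the records above is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 while the initiator retries, and the target's TCP transport is only initialized further down in this log. A minimal editorial sketch for confirming both from a shell on the host that owns 10.0.0.2 (not part of the test run; assumes Linux with python3 and iproute2's ss available):

  # Decode errno 111 and look for a listener on the NVMe/TCP port the
  # initiator keeps retrying (port taken from the log above).
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # -> ECONNREFUSED - Connection refused
  ss -ltn 'sport = :4420'
  # -> an empty table means no listener yet, so every connect() is refused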
00:29:58.178 [... duplicate connect()-failed triplets elided (timestamps 11:44:50.072659 through 11:44:50.074817); they continue to interleave with the shell trace below ...]
00:29:58.178 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:58.178 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:58.178 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.178 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
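rpc_cmd in the trace above is the test suite's wrapper around SPDK's scripts/rpc.py; the bdev_malloc_create call asks the target to create a RAM-backed bdev. A minimal sketch of the equivalent direct invocation, assuming a running SPDK target on the default RPC socket (/var/tmp/spdk.sock):

  # bdev_malloc_create <total_size_MB> <block_size>: a 64 MB malloc bdev
  # with 512-byte blocks, named Malloc0 via -b.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # rpc.py prints the new bdev's name; that is the bare "Malloc0" line that
  # shows up in the trace below once the call returns.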
00:29:58.178 [... duplicate connect()-failed triplets elided (timestamps 11:44:50.075109 through 11:44:50.102762); same errno = 111 against 10.0.0.2:4420 throughout ...]
00:29:58.181 [... duplicate connect()-failed triplets elided (timestamps 11:44:50.103059 through 11:44:50.104411) ...]
00:29:58.181 Malloc0
00:29:58.181 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.181 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:58.181 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.181 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.181 [... duplicate connect()-failed triplets elided (timestamps 11:44:50.105113 through 11:44:50.105270) ...]
00:29:58.181 [.. same connect()/qpair-failure sequence, repeated 10 times (11:44:50.105562 .. 11:44:50.107762) ..]
00:29:58.182 [.. same connect()/qpair-failure sequence, 1 occurrence (11:44:50.107968) ..]
00:29:58.182 [2024-12-09 11:44:50.108179] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:58.182 [.. same connect()/qpair-failure sequence, repeated 8 times (11:44:50.108299 .. 11:44:50.110003) ..]
00:29:58.182 [.. same connect()/qpair-failure sequence, repeated 20 times (11:44:50.110307 .. 11:44:50.115059) ..]
00:29:58.182 [.. same connect()/qpair-failure sequence, repeated 4 times (11:44:50.115423 .. 11:44:50.116331) ..]
00:29:58.182 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.182 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:58.182 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.182 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.182 [.. same connect()/qpair-failure sequence, repeated 4 times (11:44:50.116898 .. 11:44:50.117649) ..]
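Note: this step creates the subsystem the host side has been trying to reach; -s sets its serial number and -a allows any host NQN to connect. Standalone sketch with the flags mirrored from the log:

  $ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001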
00:29:58.183 [.. same connect()/qpair-failure sequence, repeated 20 times (11:44:50.117807 .. 11:44:50.122380) ..]
00:29:58.183 [.. same connect()/qpair-failure sequence, repeated 6 times (11:44:50.122682 .. 11:44:50.124193) ..]
00:29:58.183 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.183 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:58.183 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.183 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.183 [.. same connect()/qpair-failure sequence, repeated 2 times (11:44:50.124883, 11:44:50.125225) ..]
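Note: Malloc0 is an in-memory test bdev that must exist before it can be exposed as a namespace of cnode1. A sketch of both steps; the 64 MiB size and 512-byte block size below are illustrative values, not taken from this run:

  $ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  $ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0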
00:29:58.184 [.. same connect()/qpair-failure sequence, repeated 28 times (11:44:50.125542 .. 11:44:50.132376) ..]
00:29:58.184 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.184 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:58.184 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.184 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.184 [.. same connect()/qpair-failure sequence, repeated 8 times (11:44:50.132811 .. 11:44:50.134481) ..]
00:29:58.184 [.. same connect()/qpair-failure sequence, repeated 6 times (11:44:50.134546 .. 11:44:50.136065) ..]
00:29:58.185 [2024-12-09 11:44:50.136438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.185 [2024-12-09 11:44:50.136430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:58.185 [2024-12-09 11:44:50.136446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0028000b90 with addr=10.0.0.2, port=4420
00:29:58.185 qpair failed and we were unable to recover it.
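Note: adding the listener is what finally opens the socket; the "NVMe/TCP Target Listening" notice above confirms it, after which the failure mode changes from refused TCP connects to Fabrics CONNECT rejections. Standalone sketch with flags mirrored from the log:

  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420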
00:29:58.185 [2024-12-09 11:44:50.138909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.185 [2024-12-09 11:44:50.138974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.185 [2024-12-09 11:44:50.138988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.185 [2024-12-09 11:44:50.138994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.185 [2024-12-09 11:44:50.138999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90
00:29:58.185 [2024-12-09 11:44:50.139018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.185 qpair failed and we were unable to recover it.
00:29:58.185 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.185 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:58.185 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.185 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.185 [.. the same seven-line CONNECT-failure sequence (Unknown controller ID 0x1 / sct 1, sc 130 / CQ transport error -6 on qpair id 2) repeats once more at 11:44:50.148828 ..]
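Note on the status pair above: sct 1 selects the NVMe "Command Specific Status" type and sc 130 is 0x82, which falls in the Fabrics CONNECT command-specific range; read together with the target-side "Unknown controller ID 0x1", the I/O-queue CONNECT for qpair id 2 is being rejected because it references an admin controller the target no longer recognizes. Converting the code (aside):

  $ printf 'sct %d, sc %d (0x%02x)\n' 1 130 130
  sct 1, sc 130 (0x82)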
00:29:58.185 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.185 11:44:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3718392
00:29:58.185 [.. same CONNECT-failure sequence, repeated 3 times (11:44:50.158726, 11:44:50.168728, 11:44:50.178824) ..]
00:29:58.185 [2024-12-09 11:44:50.188695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.185 [2024-12-09 11:44:50.188751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.185 [2024-12-09 11:44:50.188761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.185 [2024-12-09 11:44:50.188767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.185 [2024-12-09 11:44:50.188772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.185 [2024-12-09 11:44:50.188783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.185 qpair failed and we were unable to recover it. 00:29:58.185 [2024-12-09 11:44:50.198823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.185 [2024-12-09 11:44:50.198873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.185 [2024-12-09 11:44:50.198883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.185 [2024-12-09 11:44:50.198888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.185 [2024-12-09 11:44:50.198893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.185 [2024-12-09 11:44:50.198907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.185 qpair failed and we were unable to recover it. 00:29:58.185 [2024-12-09 11:44:50.208876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.185 [2024-12-09 11:44:50.208930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.185 [2024-12-09 11:44:50.208940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.185 [2024-12-09 11:44:50.208945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.185 [2024-12-09 11:44:50.208949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.185 [2024-12-09 11:44:50.208959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.185 qpair failed and we were unable to recover it. 
00:29:58.185 [2024-12-09 11:44:50.218933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.185 [2024-12-09 11:44:50.218985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.185 [2024-12-09 11:44:50.218996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.185 [2024-12-09 11:44:50.219002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.185 [2024-12-09 11:44:50.219006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.185 [2024-12-09 11:44:50.219020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.185 qpair failed and we were unable to recover it. 00:29:58.185 [2024-12-09 11:44:50.228816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.185 [2024-12-09 11:44:50.228866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.185 [2024-12-09 11:44:50.228876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.185 [2024-12-09 11:44:50.228881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.185 [2024-12-09 11:44:50.228886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.185 [2024-12-09 11:44:50.228896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.185 qpair failed and we were unable to recover it. 00:29:58.185 [2024-12-09 11:44:50.238935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.185 [2024-12-09 11:44:50.238982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.185 [2024-12-09 11:44:50.238992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.185 [2024-12-09 11:44:50.238997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.185 [2024-12-09 11:44:50.239002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.185 [2024-12-09 11:44:50.239015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.185 qpair failed and we were unable to recover it. 
[... the same seven-line qpair-connect failure block repeats for 63 further connection attempts, target timestamps advancing in roughly 10 ms steps from 11:44:50.248 to 11:44:50.870; elided here ...]
00:29:58.977 [2024-12-09 11:44:50.880763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.977 [2024-12-09 11:44:50.880808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.977 [2024-12-09 11:44:50.880818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.880823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.880827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.880837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.890788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.890841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.890851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.890856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.890864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.890875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.900838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.900887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.900897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.900902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.900906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.900916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 
00:29:58.978 [2024-12-09 11:44:50.910804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.910864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.910883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.910888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.910892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.910907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.920864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.920954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.920965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.920970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.920974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.920984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.930899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.930957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.930967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.930972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.930976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.930986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 
00:29:58.978 [2024-12-09 11:44:50.940934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.940982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.940992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.940997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.941001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.941014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.950825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.950894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.950904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.950909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.950913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.950924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.961008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.961058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.961068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.961072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.961077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.961087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 
00:29:58.978 [2024-12-09 11:44:50.970903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.970998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.971008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.971016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.971020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.971030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.980966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.981018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.981030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.981035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.981039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.981050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:50.990943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:50.990993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:50.991003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:50.991008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:50.991015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:50.991025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 
00:29:58.978 [2024-12-09 11:44:51.001076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:51.001129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.978 [2024-12-09 11:44:51.001138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.978 [2024-12-09 11:44:51.001143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.978 [2024-12-09 11:44:51.001148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.978 [2024-12-09 11:44:51.001158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.978 qpair failed and we were unable to recover it. 00:29:58.978 [2024-12-09 11:44:51.011101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.978 [2024-12-09 11:44:51.011152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.011162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.011167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.011172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.011182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 00:29:58.979 [2024-12-09 11:44:51.021158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.021211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.021221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.021225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.021232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.021242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 
00:29:58.979 [2024-12-09 11:44:51.031169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.031221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.031230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.031235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.031239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.031249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 00:29:58.979 [2024-12-09 11:44:51.041206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.041280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.041289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.041294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.041298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.041308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 00:29:58.979 [2024-12-09 11:44:51.051222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.051271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.051281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.051286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.051290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.051300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 
00:29:58.979 [2024-12-09 11:44:51.061289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.061340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.061349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.061354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.061358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.061368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 00:29:58.979 [2024-12-09 11:44:51.071289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.071340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.071349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.071354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.071358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.071368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 00:29:58.979 [2024-12-09 11:44:51.081316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.081376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.081385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.081390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.081394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.081404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 
00:29:58.979 [2024-12-09 11:44:51.091318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.091370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.091380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.091385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.091389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.091399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 00:29:58.979 [2024-12-09 11:44:51.101256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.101307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.101316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.979 [2024-12-09 11:44:51.101321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.979 [2024-12-09 11:44:51.101325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.979 [2024-12-09 11:44:51.101335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.979 qpair failed and we were unable to recover it. 00:29:58.979 [2024-12-09 11:44:51.111389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.979 [2024-12-09 11:44:51.111440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.979 [2024-12-09 11:44:51.111450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.980 [2024-12-09 11:44:51.111455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.980 [2024-12-09 11:44:51.111459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.980 [2024-12-09 11:44:51.111469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.980 qpair failed and we were unable to recover it. 
00:29:58.980 [2024-12-09 11:44:51.121377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.980 [2024-12-09 11:44:51.121419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.980 [2024-12-09 11:44:51.121428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.980 [2024-12-09 11:44:51.121433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.980 [2024-12-09 11:44:51.121437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.980 [2024-12-09 11:44:51.121447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.980 qpair failed and we were unable to recover it. 00:29:58.980 [2024-12-09 11:44:51.131458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.980 [2024-12-09 11:44:51.131551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.980 [2024-12-09 11:44:51.131561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.980 [2024-12-09 11:44:51.131566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.980 [2024-12-09 11:44:51.131570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:58.980 [2024-12-09 11:44:51.131580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.980 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.141491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.141537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.141548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.141553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.141557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.141568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 
00:29:59.243 [2024-12-09 11:44:51.151500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.151546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.151557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.151566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.151571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.151581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.161563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.161646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.161656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.161661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.161665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.161675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.171545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.171599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.171609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.171614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.171618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.171628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 
00:29:59.243 [2024-12-09 11:44:51.181586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.181637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.181646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.181651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.181655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.181665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.191669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.191715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.191726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.191731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.191735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.191748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.201624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.201676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.201686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.201690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.201695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.201705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 
00:29:59.243 [2024-12-09 11:44:51.211653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.211702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.211713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.211718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.211722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.211732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.221721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.221772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.221782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.221787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.221791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.221801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.231734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.231788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.231797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.231802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.231806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.231816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 
00:29:59.243 [2024-12-09 11:44:51.241766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.241820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.241830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.241835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.241839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.241849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.251801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.251850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.251860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.251865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.251869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.243 [2024-12-09 11:44:51.251879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.243 qpair failed and we were unable to recover it. 00:29:59.243 [2024-12-09 11:44:51.261837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.243 [2024-12-09 11:44:51.261910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.243 [2024-12-09 11:44:51.261920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.243 [2024-12-09 11:44:51.261925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.243 [2024-12-09 11:44:51.261929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.261939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 
00:29:59.244 [2024-12-09 11:44:51.271870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.271918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.271927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.271932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.271936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.271946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 00:29:59.244 [2024-12-09 11:44:51.281883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.281927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.281938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.281943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.281947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.281958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 00:29:59.244 [2024-12-09 11:44:51.291811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.291860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.291870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.291875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.291880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.291890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 
00:29:59.244 [2024-12-09 11:44:51.301927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.301978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.301987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.301992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.301996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.302006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 00:29:59.244 [2024-12-09 11:44:51.311842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.311891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.311901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.311906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.311910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.311920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 00:29:59.244 [2024-12-09 11:44:51.321992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.322048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.322058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.322062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.322067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.322079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 
00:29:59.244 [2024-12-09 11:44:51.331887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.331936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.331945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.331950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.331954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.331964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 00:29:59.244 [2024-12-09 11:44:51.342057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.342106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.342115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.342120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.342124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.342135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 00:29:59.244 [2024-12-09 11:44:51.352052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.244 [2024-12-09 11:44:51.352103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.244 [2024-12-09 11:44:51.352112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.244 [2024-12-09 11:44:51.352117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.244 [2024-12-09 11:44:51.352121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:29:59.244 [2024-12-09 11:44:51.352131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.244 qpair failed and we were unable to recover it. 
00:29:59.244 [2024-12-09 11:44:51.362079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.244 [2024-12-09 11:44:51.362123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.244 [2024-12-09 11:44:51.362133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.244 [2024-12-09 11:44:51.362138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.244 [2024-12-09 11:44:51.362142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90
00:29:59.244 [2024-12-09 11:44:51.362153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.244 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure sequence repeats 68 more times, roughly every 10 ms, from 11:44:51.372 through 11:44:52.044; only the timestamps differ (the elapsed-time prefix advances from 00:29:59.244 to 00:30:00.039) ...]
00:30:00.039 [2024-12-09 11:44:52.053865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.053912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.053921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.053926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.053931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.053941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.063903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.063953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.063963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.063968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.063972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.063983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.074046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.074096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.074106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.074110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.074115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.074125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 
00:30:00.039 [2024-12-09 11:44:52.084075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.084171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.084180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.084185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.084189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.084199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.093990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.094046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.094056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.094061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.094065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.094076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.104142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.104192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.104205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.104210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.104214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.104225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 
00:30:00.039 [2024-12-09 11:44:52.114167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.114213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.114223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.114228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.114233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.114243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.124201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.124246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.124256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.124261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.124265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.124275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.134228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.134319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.134329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.134334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.134338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.134348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 
00:30:00.039 [2024-12-09 11:44:52.144124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.144185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.144194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.144202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.144206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.144217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.154283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.154339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.154348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.154353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.154358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.154368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.164309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.164358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.164368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.164373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.164377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.164387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 
00:30:00.039 [2024-12-09 11:44:52.174343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.174394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.039 [2024-12-09 11:44:52.174404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.039 [2024-12-09 11:44:52.174409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.039 [2024-12-09 11:44:52.174413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.039 [2024-12-09 11:44:52.174423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.039 qpair failed and we were unable to recover it. 00:30:00.039 [2024-12-09 11:44:52.184374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.039 [2024-12-09 11:44:52.184423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.040 [2024-12-09 11:44:52.184432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.040 [2024-12-09 11:44:52.184437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.040 [2024-12-09 11:44:52.184442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.040 [2024-12-09 11:44:52.184451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.040 qpair failed and we were unable to recover it. 00:30:00.040 [2024-12-09 11:44:52.194265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.040 [2024-12-09 11:44:52.194319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.040 [2024-12-09 11:44:52.194330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.040 [2024-12-09 11:44:52.194335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.040 [2024-12-09 11:44:52.194340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.040 [2024-12-09 11:44:52.194350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.040 qpair failed and we were unable to recover it. 
00:30:00.302 [2024-12-09 11:44:52.204284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.204329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.204339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.204344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.204348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.204359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.214467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.214517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.214526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.214531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.214536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.214546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.224490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.224542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.224552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.224557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.224562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.224571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 
00:30:00.303 [2024-12-09 11:44:52.234495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.234545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.234554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.234560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.234564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.234574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.244535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.244584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.244594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.244598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.244603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.244612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.254537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.254585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.254594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.254599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.254604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.254613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 
00:30:00.303 [2024-12-09 11:44:52.264602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.264651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.264660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.264665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.264669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.264679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.274627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.274676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.274685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.274693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.274697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.274707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.284600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.284642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.284653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.284658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.284662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.284672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 
00:30:00.303 [2024-12-09 11:44:52.294658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.294705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.294715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.294720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.294725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.294735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.304708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.304755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.304764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.304769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.304773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.304783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 00:30:00.303 [2024-12-09 11:44:52.314647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.314697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.314707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.314712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.303 [2024-12-09 11:44:52.314716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.303 [2024-12-09 11:44:52.314729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.303 qpair failed and we were unable to recover it. 
00:30:00.303 [2024-12-09 11:44:52.324748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.303 [2024-12-09 11:44:52.324793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.303 [2024-12-09 11:44:52.324803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.303 [2024-12-09 11:44:52.324808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.324813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.324823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.334786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.334870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.334880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.334885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.334889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.334899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.344824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.344906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.344916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.344921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.344925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.344935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 
00:30:00.304 [2024-12-09 11:44:52.354812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.354860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.354869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.354874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.354879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.354889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.364869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.364915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.364925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.364930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.364934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.364944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.374903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.374951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.374960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.374965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.374969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.374979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 
00:30:00.304 [2024-12-09 11:44:52.385061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.385118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.385128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.385132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.385137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.385147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.395018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.395063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.395073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.395077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.395082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.395092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.405052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.405098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.405111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.405116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.405120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.405131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 
00:30:00.304 [2024-12-09 11:44:52.415060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.415108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.415118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.415123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.415127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.415138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.425054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.425106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.425116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.425121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.425125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.425135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.434931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.434977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.434986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.434991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.434995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.435005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 
00:30:00.304 [2024-12-09 11:44:52.445104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.445153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.445163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.445168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.445174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.445185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.304 [2024-12-09 11:44:52.455132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.304 [2024-12-09 11:44:52.455178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.304 [2024-12-09 11:44:52.455188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.304 [2024-12-09 11:44:52.455193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.304 [2024-12-09 11:44:52.455198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.304 [2024-12-09 11:44:52.455208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.304 qpair failed and we were unable to recover it. 00:30:00.567 [2024-12-09 11:44:52.465145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.567 [2024-12-09 11:44:52.465199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.567 [2024-12-09 11:44:52.465208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.567 [2024-12-09 11:44:52.465214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.567 [2024-12-09 11:44:52.465218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.567 [2024-12-09 11:44:52.465228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.567 qpair failed and we were unable to recover it. 
00:30:00.567 [2024-12-09 11:44:52.475056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.567 [2024-12-09 11:44:52.475102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.568 [2024-12-09 11:44:52.475111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.568 [2024-12-09 11:44:52.475116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.568 [2024-12-09 11:44:52.475120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.568 [2024-12-09 11:44:52.475130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.568 qpair failed and we were unable to recover it. 00:30:00.568 [2024-12-09 11:44:52.485220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.568 [2024-12-09 11:44:52.485271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.568 [2024-12-09 11:44:52.485281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.568 [2024-12-09 11:44:52.485285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.568 [2024-12-09 11:44:52.485290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.568 [2024-12-09 11:44:52.485300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.568 qpair failed and we were unable to recover it. 00:30:00.568 [2024-12-09 11:44:52.495249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.568 [2024-12-09 11:44:52.495299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.568 [2024-12-09 11:44:52.495310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.568 [2024-12-09 11:44:52.495315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.568 [2024-12-09 11:44:52.495319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:00.568 [2024-12-09 11:44:52.495329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.568 qpair failed and we were unable to recover it. 
00:30:00.568 [2024-12-09 11:44:52.505290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.568 [2024-12-09 11:44:52.505341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.568 [2024-12-09 11:44:52.505351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.568 [2024-12-09 11:44:52.505356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.568 [2024-12-09 11:44:52.505360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90
00:30:00.568 [2024-12-09 11:44:52.505370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.568 qpair failed and we were unable to recover it.
[... the identical seven-line CONNECT failure sequence repeats 69 times in total, advancing roughly every 10 ms from 11:44:52.505 through 11:44:53.187; only the first and last occurrences are shown ...]
00:30:01.097 [2024-12-09 11:44:53.187037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.097 [2024-12-09 11:44:53.187078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.097 [2024-12-09 11:44:53.187088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.097 [2024-12-09 11:44:53.187093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.097 [2024-12-09 11:44:53.187097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90
00:30:01.097 [2024-12-09 11:44:53.187107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.097 qpair failed and we were unable to recover it.
00:30:01.097 [2024-12-09 11:44:53.197073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.097 [2024-12-09 11:44:53.197126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.097 [2024-12-09 11:44:53.197136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.097 [2024-12-09 11:44:53.197140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.097 [2024-12-09 11:44:53.197144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.097 [2024-12-09 11:44:53.197154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.097 qpair failed and we were unable to recover it. 00:30:01.097 [2024-12-09 11:44:53.207125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.097 [2024-12-09 11:44:53.207162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.097 [2024-12-09 11:44:53.207172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.097 [2024-12-09 11:44:53.207176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.097 [2024-12-09 11:44:53.207181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.097 [2024-12-09 11:44:53.207191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.097 qpair failed and we were unable to recover it. 00:30:01.097 [2024-12-09 11:44:53.217150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.097 [2024-12-09 11:44:53.217203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.097 [2024-12-09 11:44:53.217213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.097 [2024-12-09 11:44:53.217218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.097 [2024-12-09 11:44:53.217222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.097 [2024-12-09 11:44:53.217232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.097 qpair failed and we were unable to recover it. 
00:30:01.097 [2024-12-09 11:44:53.227178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.097 [2024-12-09 11:44:53.227220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.097 [2024-12-09 11:44:53.227232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.097 [2024-12-09 11:44:53.227237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.097 [2024-12-09 11:44:53.227241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.097 [2024-12-09 11:44:53.227251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.097 qpair failed and we were unable to recover it. 00:30:01.097 [2024-12-09 11:44:53.237210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.097 [2024-12-09 11:44:53.237276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.097 [2024-12-09 11:44:53.237286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.097 [2024-12-09 11:44:53.237290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.097 [2024-12-09 11:44:53.237295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.097 [2024-12-09 11:44:53.237305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.097 qpair failed and we were unable to recover it. 00:30:01.097 [2024-12-09 11:44:53.247191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.097 [2024-12-09 11:44:53.247227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.097 [2024-12-09 11:44:53.247236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.097 [2024-12-09 11:44:53.247241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.097 [2024-12-09 11:44:53.247245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.097 [2024-12-09 11:44:53.247256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.098 qpair failed and we were unable to recover it. 
00:30:01.360 [2024-12-09 11:44:53.257231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.360 [2024-12-09 11:44:53.257271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.360 [2024-12-09 11:44:53.257281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.360 [2024-12-09 11:44:53.257286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.360 [2024-12-09 11:44:53.257291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.360 [2024-12-09 11:44:53.257301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.360 qpair failed and we were unable to recover it. 00:30:01.360 [2024-12-09 11:44:53.267145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.360 [2024-12-09 11:44:53.267190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.360 [2024-12-09 11:44:53.267199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.360 [2024-12-09 11:44:53.267206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.360 [2024-12-09 11:44:53.267211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.360 [2024-12-09 11:44:53.267221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.360 qpair failed and we were unable to recover it. 00:30:01.360 [2024-12-09 11:44:53.277274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.360 [2024-12-09 11:44:53.277311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.360 [2024-12-09 11:44:53.277320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.360 [2024-12-09 11:44:53.277325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.360 [2024-12-09 11:44:53.277329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.360 [2024-12-09 11:44:53.277339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.360 qpair failed and we were unable to recover it. 
00:30:01.360 [2024-12-09 11:44:53.287334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.360 [2024-12-09 11:44:53.287373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.360 [2024-12-09 11:44:53.287384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.360 [2024-12-09 11:44:53.287389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.360 [2024-12-09 11:44:53.287393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.360 [2024-12-09 11:44:53.287403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.360 qpair failed and we were unable to recover it. 00:30:01.360 [2024-12-09 11:44:53.297312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.360 [2024-12-09 11:44:53.297350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.360 [2024-12-09 11:44:53.297360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.360 [2024-12-09 11:44:53.297365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.360 [2024-12-09 11:44:53.297369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.360 [2024-12-09 11:44:53.297379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.360 qpair failed and we were unable to recover it. 00:30:01.360 [2024-12-09 11:44:53.307405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.360 [2024-12-09 11:44:53.307448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.360 [2024-12-09 11:44:53.307458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.360 [2024-12-09 11:44:53.307462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.360 [2024-12-09 11:44:53.307467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.360 [2024-12-09 11:44:53.307476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.360 qpair failed and we were unable to recover it. 
00:30:01.360 [2024-12-09 11:44:53.317269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.360 [2024-12-09 11:44:53.317311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.360 [2024-12-09 11:44:53.317321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.317326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.317331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.317341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.327328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.327375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.327385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.327390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.327394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.327404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.337453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.337500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.337509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.337514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.337519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.337529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 
00:30:01.361 [2024-12-09 11:44:53.347456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.347497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.347507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.347511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.347516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.347525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.357518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.357599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.357609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.357613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.357618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.357627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.367553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.367594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.367603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.367608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.367612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.367622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 
00:30:01.361 [2024-12-09 11:44:53.377577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.377616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.377625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.377630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.377634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.377644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.387654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.387698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.387708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.387713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.387718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.387728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.397618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.397667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.397677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.397685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.397689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.397699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 
00:30:01.361 [2024-12-09 11:44:53.407636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.407695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.407705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.407710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.407714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.407724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.417555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.417596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.417607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.417612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.417617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.417627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.427590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.427632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.427642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.427647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.427652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.427662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 
00:30:01.361 [2024-12-09 11:44:53.437741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.437792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.437801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.437806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.437810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.361 [2024-12-09 11:44:53.437823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.361 qpair failed and we were unable to recover it. 00:30:01.361 [2024-12-09 11:44:53.447749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.361 [2024-12-09 11:44:53.447786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.361 [2024-12-09 11:44:53.447796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.361 [2024-12-09 11:44:53.447801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.361 [2024-12-09 11:44:53.447805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.447816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 00:30:01.362 [2024-12-09 11:44:53.457794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.362 [2024-12-09 11:44:53.457832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.362 [2024-12-09 11:44:53.457842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.362 [2024-12-09 11:44:53.457847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.362 [2024-12-09 11:44:53.457851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.457861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 
00:30:01.362 [2024-12-09 11:44:53.467807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.362 [2024-12-09 11:44:53.467854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.362 [2024-12-09 11:44:53.467863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.362 [2024-12-09 11:44:53.467868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.362 [2024-12-09 11:44:53.467873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.467883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 00:30:01.362 [2024-12-09 11:44:53.477828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.362 [2024-12-09 11:44:53.477869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.362 [2024-12-09 11:44:53.477878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.362 [2024-12-09 11:44:53.477883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.362 [2024-12-09 11:44:53.477887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.477897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 00:30:01.362 [2024-12-09 11:44:53.487880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.362 [2024-12-09 11:44:53.487920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.362 [2024-12-09 11:44:53.487931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.362 [2024-12-09 11:44:53.487935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.362 [2024-12-09 11:44:53.487940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.487950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 
00:30:01.362 [2024-12-09 11:44:53.497921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.362 [2024-12-09 11:44:53.497963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.362 [2024-12-09 11:44:53.497973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.362 [2024-12-09 11:44:53.497978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.362 [2024-12-09 11:44:53.497982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.497992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 00:30:01.362 [2024-12-09 11:44:53.508000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.362 [2024-12-09 11:44:53.508044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.362 [2024-12-09 11:44:53.508053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.362 [2024-12-09 11:44:53.508058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.362 [2024-12-09 11:44:53.508063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.508073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 00:30:01.362 [2024-12-09 11:44:53.517975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.362 [2024-12-09 11:44:53.518017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.362 [2024-12-09 11:44:53.518027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.362 [2024-12-09 11:44:53.518032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.362 [2024-12-09 11:44:53.518036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.362 [2024-12-09 11:44:53.518046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.362 qpair failed and we were unable to recover it. 
00:30:01.624 [2024-12-09 11:44:53.527945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.527987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.527999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.528004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.528009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.528022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 00:30:01.624 [2024-12-09 11:44:53.537884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.537972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.537982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.537987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.537991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.538001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 00:30:01.624 [2024-12-09 11:44:53.547929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.547970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.547980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.547985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.547989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.547999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 
00:30:01.624 [2024-12-09 11:44:53.558088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.558132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.558141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.558146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.558151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.558161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 00:30:01.624 [2024-12-09 11:44:53.568099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.568137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.568146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.568152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.568158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.568169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 00:30:01.624 [2024-12-09 11:44:53.578093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.578135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.578144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.578149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.578153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.578163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 
00:30:01.624 [2024-12-09 11:44:53.588169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.588210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.588220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.588225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.588229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.588239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 00:30:01.624 [2024-12-09 11:44:53.598146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.598181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.598190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.598195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.598199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.598209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 00:30:01.624 [2024-12-09 11:44:53.608201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.608269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.608278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.608283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.608287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.624 [2024-12-09 11:44:53.608297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.624 qpair failed and we were unable to recover it. 
00:30:01.624 [2024-12-09 11:44:53.618147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.624 [2024-12-09 11:44:53.618192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.624 [2024-12-09 11:44:53.618202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.624 [2024-12-09 11:44:53.618207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.624 [2024-12-09 11:44:53.618211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.625 [2024-12-09 11:44:53.618221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.625 qpair failed and we were unable to recover it. 00:30:01.625 [2024-12-09 11:44:53.628285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.625 [2024-12-09 11:44:53.628328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.625 [2024-12-09 11:44:53.628339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.625 [2024-12-09 11:44:53.628344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.625 [2024-12-09 11:44:53.628349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.625 [2024-12-09 11:44:53.628359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.625 qpair failed and we were unable to recover it. 00:30:01.625 [2024-12-09 11:44:53.638283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.625 [2024-12-09 11:44:53.638322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.625 [2024-12-09 11:44:53.638332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.625 [2024-12-09 11:44:53.638337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.625 [2024-12-09 11:44:53.638342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.625 [2024-12-09 11:44:53.638352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.625 qpair failed and we were unable to recover it. 
00:30:01.625 [2024-12-09 11:44:53.648320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.625 [2024-12-09 11:44:53.648360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.625 [2024-12-09 11:44:53.648370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.625 [2024-12-09 11:44:53.648376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.625 [2024-12-09 11:44:53.648380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:01.625 [2024-12-09 11:44:53.648391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.625 qpair failed and we were unable to recover it.
[The same six-message CONNECT failure sequence, each ending in "qpair failed and we were unable to recover it.", repeats roughly 69 times at ~10 ms intervals from 11:44:53.648 through 11:44:54.330 (elapsed-time prefix 00:30:01.625 through 00:30:02.154); only the timestamps change. Every occurrence reports the same tqpair 0x7f0028000b90, qpair id 2, and status sct 1, sc 130.]
00:30:02.416 [2024-12-09 11:44:54.340143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-12-09 11:44:54.340186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-12-09 11:44:54.340196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-12-09 11:44:54.340201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-12-09 11:44:54.340206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.416 [2024-12-09 11:44:54.340216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.416 qpair failed and we were unable to recover it. 00:30:02.416 [2024-12-09 11:44:54.350192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-12-09 11:44:54.350273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-12-09 11:44:54.350285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-12-09 11:44:54.350290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-12-09 11:44:54.350294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.416 [2024-12-09 11:44:54.350304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.416 qpair failed and we were unable to recover it. 00:30:02.416 [2024-12-09 11:44:54.360211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-12-09 11:44:54.360251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-12-09 11:44:54.360260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-12-09 11:44:54.360265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-12-09 11:44:54.360269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.416 [2024-12-09 11:44:54.360279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-12-09 11:44:54.370241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-12-09 11:44:54.370280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-12-09 11:44:54.370290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-12-09 11:44:54.370294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-12-09 11:44:54.370299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.416 [2024-12-09 11:44:54.370309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.416 qpair failed and we were unable to recover it. 00:30:02.416 [2024-12-09 11:44:54.380274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-12-09 11:44:54.380314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-12-09 11:44:54.380324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-12-09 11:44:54.380329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-12-09 11:44:54.380333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.416 [2024-12-09 11:44:54.380343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.416 qpair failed and we were unable to recover it. 00:30:02.416 [2024-12-09 11:44:54.390311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-12-09 11:44:54.390380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.416 [2024-12-09 11:44:54.390390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.416 [2024-12-09 11:44:54.390397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.416 [2024-12-09 11:44:54.390402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.416 [2024-12-09 11:44:54.390412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.416 qpair failed and we were unable to recover it. 
00:30:02.416 [2024-12-09 11:44:54.400333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.416 [2024-12-09 11:44:54.400373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.400382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.400386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.400391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.400401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.410329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.410377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.410386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.410390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.410395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.410405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.420349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.420392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.420401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.420406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.420410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.420421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-12-09 11:44:54.430450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.430524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.430533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.430538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.430542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.430558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.440404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.440438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.440448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.440453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.440457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.440467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.450450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.450490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.450500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.450504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.450509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.450518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-12-09 11:44:54.460343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.460383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.460395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.460400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.460405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.460415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.470508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.470551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.470561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.470566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.470571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.470581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.480400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.480463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.480473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.480478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.480482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.480493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-12-09 11:44:54.490426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.490463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.490472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.490477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.490482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.490492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.500445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.500485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.500495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.500499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.500504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.500514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.510486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.510555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.510565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.510570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.510574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.510585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 
00:30:02.417 [2024-12-09 11:44:54.520636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.520675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.520684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.520692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.520696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.417 [2024-12-09 11:44:54.520706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.417 qpair failed and we were unable to recover it. 00:30:02.417 [2024-12-09 11:44:54.530658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.417 [2024-12-09 11:44:54.530702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.417 [2024-12-09 11:44:54.530711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.417 [2024-12-09 11:44:54.530716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.417 [2024-12-09 11:44:54.530720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.418 [2024-12-09 11:44:54.530730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.418 qpair failed and we were unable to recover it. 00:30:02.418 [2024-12-09 11:44:54.540698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.418 [2024-12-09 11:44:54.540741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.418 [2024-12-09 11:44:54.540750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.418 [2024-12-09 11:44:54.540755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.418 [2024-12-09 11:44:54.540759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.418 [2024-12-09 11:44:54.540769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.418 qpair failed and we were unable to recover it. 
00:30:02.418 [2024-12-09 11:44:54.550703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.418 [2024-12-09 11:44:54.550747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.418 [2024-12-09 11:44:54.550756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.418 [2024-12-09 11:44:54.550761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.418 [2024-12-09 11:44:54.550765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.418 [2024-12-09 11:44:54.550775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.418 qpair failed and we were unable to recover it. 00:30:02.418 [2024-12-09 11:44:54.560735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.418 [2024-12-09 11:44:54.560773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.418 [2024-12-09 11:44:54.560783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.418 [2024-12-09 11:44:54.560788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.418 [2024-12-09 11:44:54.560792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.418 [2024-12-09 11:44:54.560805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.418 qpair failed and we were unable to recover it. 00:30:02.418 [2024-12-09 11:44:54.570629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.418 [2024-12-09 11:44:54.570667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.418 [2024-12-09 11:44:54.570677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.418 [2024-12-09 11:44:54.570682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.418 [2024-12-09 11:44:54.570686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.418 [2024-12-09 11:44:54.570697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.418 qpair failed and we were unable to recover it. 
00:30:02.679 [2024-12-09 11:44:54.580776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-12-09 11:44:54.580837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.679 [2024-12-09 11:44:54.580847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.679 [2024-12-09 11:44:54.580852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.679 [2024-12-09 11:44:54.580856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.679 [2024-12-09 11:44:54.580866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.679 qpair failed and we were unable to recover it. 00:30:02.679 [2024-12-09 11:44:54.590847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.679 [2024-12-09 11:44:54.590894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.590912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.590918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.590923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.590937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.600859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.600899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.600910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.600915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.600920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.600931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-12-09 11:44:54.610878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.610914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.610924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.610929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.610934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.610944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.620772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.620813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.620823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.620828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.620832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.620843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.630956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.631003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.631016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.631021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.631025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.631035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-12-09 11:44:54.640860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.640905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.640914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.640919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.640923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.640933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.650992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.651072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.651084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.651089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.651093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.651103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.661024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.661066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.661076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.661081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.661085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.661095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-12-09 11:44:54.671031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.671078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.671087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.671092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.671096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.671107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.681065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.681103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.681112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.681117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.681122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.681132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.691084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.691125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.691135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.691140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.691147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.691158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 
00:30:02.680 [2024-12-09 11:44:54.701122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.701166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.701176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.701181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.701185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.701196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.711160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.680 [2024-12-09 11:44:54.711206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.680 [2024-12-09 11:44:54.711215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.680 [2024-12-09 11:44:54.711221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.680 [2024-12-09 11:44:54.711225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.680 [2024-12-09 11:44:54.711236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.680 qpair failed and we were unable to recover it. 00:30:02.680 [2024-12-09 11:44:54.721181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.721222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.721231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.721236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.721241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.721251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-12-09 11:44:54.731209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.731298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.731307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.731313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.731317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.731328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-12-09 11:44:54.741249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.741343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.741353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.741358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.741362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.741372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-12-09 11:44:54.751239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.751282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.751291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.751296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.751300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.751310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-12-09 11:44:54.761309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.761380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.761390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.761396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.761400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.761410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-12-09 11:44:54.771327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.771366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.771376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.771380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.771385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.771395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-12-09 11:44:54.781227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.781268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.781280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.781285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.781289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.781299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-12-09 11:44:54.791248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.791295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.791305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.791310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.791314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.791325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-12-09 11:44:54.801263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.801302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.801311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.801316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.801320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.801330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-12-09 11:44:54.811448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.811485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.811494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.811499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.811504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.811513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 
00:30:02.681 [2024-12-09 11:44:54.821318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.821358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.821368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.821373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.821380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.821390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.681 [2024-12-09 11:44:54.831484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.681 [2024-12-09 11:44:54.831527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.681 [2024-12-09 11:44:54.831536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.681 [2024-12-09 11:44:54.831541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.681 [2024-12-09 11:44:54.831546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.681 [2024-12-09 11:44:54.831556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.681 qpair failed and we were unable to recover it. 00:30:02.943 [2024-12-09 11:44:54.841407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.943 [2024-12-09 11:44:54.841466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.943 [2024-12-09 11:44:54.841475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.943 [2024-12-09 11:44:54.841480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.943 [2024-12-09 11:44:54.841484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.943 [2024-12-09 11:44:54.841494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.943 qpair failed and we were unable to recover it. 
00:30:02.943 [2024-12-09 11:44:54.851538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.943 [2024-12-09 11:44:54.851581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.943 [2024-12-09 11:44:54.851591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.943 [2024-12-09 11:44:54.851596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.943 [2024-12-09 11:44:54.851600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.943 [2024-12-09 11:44:54.851610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-12-09 11:44:54.861576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.943 [2024-12-09 11:44:54.861616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.943 [2024-12-09 11:44:54.861626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.943 [2024-12-09 11:44:54.861631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.943 [2024-12-09 11:44:54.861636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.943 [2024-12-09 11:44:54.861646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-12-09 11:44:54.871598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.943 [2024-12-09 11:44:54.871643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.943 [2024-12-09 11:44:54.871652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.943 [2024-12-09 11:44:54.871657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.943 [2024-12-09 11:44:54.871661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.943 [2024-12-09 11:44:54.871672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-12-09 11:44:54.881648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.881730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.881740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.881745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.881749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.881759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.891632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.891684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.891693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.891698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.891703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.891713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.901670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.901712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.901723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.901728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.901733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.901743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-12-09 11:44:54.911626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.911687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.911699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.911704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.911708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.911718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.921740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.921786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.921804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.921810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.921815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.921829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.931756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.931797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.931816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.931822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.931827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.931840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-12-09 11:44:54.941780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.941827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.941845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.941851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.941856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.941870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.951791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.951837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.951847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.951856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.951861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.951872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.961831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.961868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.961878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.961882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.961887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.961897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-12-09 11:44:54.971876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.971914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.971923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.971928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.971932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.971942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.981903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.981946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.981956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.981961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.981965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.981975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-12-09 11:44:54.991900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:54.991942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:54.991952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:54.991957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:54.991961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:54.991974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-12-09 11:44:55.002001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.944 [2024-12-09 11:44:55.002071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.944 [2024-12-09 11:44:55.002081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.944 [2024-12-09 11:44:55.002086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.944 [2024-12-09 11:44:55.002090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.944 [2024-12-09 11:44:55.002100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.945 [2024-12-09 11:44:55.012001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.012043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.012053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.012058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.012062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.012072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-12-09 11:44:55.022006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.022054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.022064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.022068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.022073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.022083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 
00:30:02.945 [2024-12-09 11:44:55.032053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.032095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.032104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.032109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.032114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.032124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-12-09 11:44:55.042061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.042106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.042115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.042120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.042124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.042135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-12-09 11:44:55.052088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.052129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.052138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.052143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.052148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.052158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 
00:30:02.945 [2024-12-09 11:44:55.062127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.062169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.062179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.062184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.062188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.062199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-12-09 11:44:55.072185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.072227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.072236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.072241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.072246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.072256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-12-09 11:44:55.082173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.082216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.082226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.082233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.082238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.082248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 
00:30:02.945 [2024-12-09 11:44:55.092168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.092208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.092218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.092223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.092227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.092237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-12-09 11:44:55.102216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.945 [2024-12-09 11:44:55.102269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.945 [2024-12-09 11:44:55.102280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.945 [2024-12-09 11:44:55.102285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.945 [2024-12-09 11:44:55.102289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:02.945 [2024-12-09 11:44:55.102300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.945 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.112239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.112282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.112292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.112297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.112301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:03.207 [2024-12-09 11:44:55.112311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.207 qpair failed and we were unable to recover it. 
00:30:03.207 [2024-12-09 11:44:55.122274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.122354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.122364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.122369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.122373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:03.207 [2024-12-09 11:44:55.122386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.207 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.132175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.132212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.132223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.132228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.132232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:03.207 [2024-12-09 11:44:55.132243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.207 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.142321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.142363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.142373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.142377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.142382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:03.207 [2024-12-09 11:44:55.142392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.207 qpair failed and we were unable to recover it. 
00:30:03.207 [2024-12-09 11:44:55.152371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.152417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.152427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.152431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.152436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:03.207 [2024-12-09 11:44:55.152446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.207 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.162455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.162513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.162523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.162527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.162532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0028000b90 00:30:03.207 [2024-12-09 11:44:55.162542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.207 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.172386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.172439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.172464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.172473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.172480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.207 [2024-12-09 11:44:55.172499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.207 qpair failed and we were unable to recover it. 
00:30:03.207 [2024-12-09 11:44:55.182317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.182368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.182383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.182390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.182396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.207 [2024-12-09 11:44:55.182412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.207 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.192350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.192395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.192408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.192416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.192422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.207 [2024-12-09 11:44:55.192437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.207 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.202488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.202577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.202590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.202597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.202603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.207 [2024-12-09 11:44:55.202617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.207 qpair failed and we were unable to recover it. 
00:30:03.207 [2024-12-09 11:44:55.212514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.212559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.212580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.212587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.212594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.207 [2024-12-09 11:44:55.212608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.207 qpair failed and we were unable to recover it. 00:30:03.207 [2024-12-09 11:44:55.222557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.207 [2024-12-09 11:44:55.222608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.207 [2024-12-09 11:44:55.222622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.207 [2024-12-09 11:44:55.222628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.207 [2024-12-09 11:44:55.222635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.222648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.232644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.232717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.232730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.232737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.232743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.232757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 
00:30:03.208 [2024-12-09 11:44:55.242602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.242652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.242677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.242685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.242693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.242712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.252691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.252751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.252775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.252784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.252795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.252816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.262655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.262751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.262776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.262785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.262792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.262811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 
00:30:03.208 [2024-12-09 11:44:55.272694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.272744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.272759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.272767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.272774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.272789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.282583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.282626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.282643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.282650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.282657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.282672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.292748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.292794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.292809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.292816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.292822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.292837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 
00:30:03.208 [2024-12-09 11:44:55.302768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.302817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.302830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.302838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.302844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.302858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.312698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.312747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.312760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.312767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.312773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.312787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.322737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.322794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.322810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.322817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.322824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.322840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 
00:30:03.208 [2024-12-09 11:44:55.332838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.332898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.332912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.332920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.332926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.332940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.342887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.342938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.342967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.342976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.342983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.343002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.208 [2024-12-09 11:44:55.352928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.352982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.352997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.353004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.353015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.353031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 
00:30:03.208 [2024-12-09 11:44:55.362920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.208 [2024-12-09 11:44:55.362978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.208 [2024-12-09 11:44:55.362991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.208 [2024-12-09 11:44:55.362998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.208 [2024-12-09 11:44:55.363005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.208 [2024-12-09 11:44:55.363022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.208 qpair failed and we were unable to recover it. 00:30:03.469 [2024-12-09 11:44:55.372951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.469 [2024-12-09 11:44:55.372994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.469 [2024-12-09 11:44:55.373008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.469 [2024-12-09 11:44:55.373019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.469 [2024-12-09 11:44:55.373025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.470 [2024-12-09 11:44:55.373040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.470 qpair failed and we were unable to recover it. 00:30:03.470 [2024-12-09 11:44:55.382989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.470 [2024-12-09 11:44:55.383058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.470 [2024-12-09 11:44:55.383072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.470 [2024-12-09 11:44:55.383079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.470 [2024-12-09 11:44:55.383090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.470 [2024-12-09 11:44:55.383104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.470 qpair failed and we were unable to recover it. 
00:30:03.470 [2024-12-09 11:44:55.393029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.470 [2024-12-09 11:44:55.393081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.470 [2024-12-09 11:44:55.393095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.470 [2024-12-09 11:44:55.393102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.470 [2024-12-09 11:44:55.393109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.470 [2024-12-09 11:44:55.393123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.470 qpair failed and we were unable to recover it. 00:30:03.470 [2024-12-09 11:44:55.403038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.470 [2024-12-09 11:44:55.403086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.470 [2024-12-09 11:44:55.403099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.470 [2024-12-09 11:44:55.403106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.470 [2024-12-09 11:44:55.403112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.470 [2024-12-09 11:44:55.403126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.470 qpair failed and we were unable to recover it. 00:30:03.470 [2024-12-09 11:44:55.413059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.470 [2024-12-09 11:44:55.413104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.470 [2024-12-09 11:44:55.413117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.470 [2024-12-09 11:44:55.413124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.470 [2024-12-09 11:44:55.413131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490 00:30:03.470 [2024-12-09 11:44:55.413145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.470 qpair failed and we were unable to recover it. 
00:30:03.470 [2024-12-09 11:44:55.423063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.470 [2024-12-09 11:44:55.423110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.470 [2024-12-09 11:44:55.423123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.470 [2024-12-09 11:44:55.423131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.470 [2024-12-09 11:44:55.423138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490
00:30:03.470 [2024-12-09 11:44:55.423152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.470 qpair failed and we were unable to recover it.
00:30:03.470 [2024-12-09 11:44:55.433129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.470 [2024-12-09 11:44:55.433179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.470 [2024-12-09 11:44:55.433192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.470 [2024-12-09 11:44:55.433199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.470 [2024-12-09 11:44:55.433206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490
00:30:03.470 [2024-12-09 11:44:55.433220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.470 qpair failed and we were unable to recover it.
00:30:03.470 [2024-12-09 11:44:55.443036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.470 [2024-12-09 11:44:55.443083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.470 [2024-12-09 11:44:55.443099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.470 [2024-12-09 11:44:55.443106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.470 [2024-12-09 11:44:55.443112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490
00:30:03.470 [2024-12-09 11:44:55.443127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.470 qpair failed and we were unable to recover it.
00:30:03.470 [2024-12-09 11:44:55.453036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.470 [2024-12-09 11:44:55.453078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.470 [2024-12-09 11:44:55.453092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.470 [2024-12-09 11:44:55.453099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.470 [2024-12-09 11:44:55.453106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a0490
00:30:03.470 [2024-12-09 11:44:55.453120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.470 qpair failed and we were unable to recover it.
00:30:03.470 [2024-12-09 11:44:55.453271] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:30:03.470 A controller has encountered a failure and is being reset.
00:30:03.470 [2024-12-09 11:44:55.453310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189d030 (9): Bad file descriptor
00:30:03.732 Controller properly reset.
00:30:03.732 Initializing NVMe Controllers
00:30:03.732 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:03.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:03.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:03.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:03.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:03.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:03.732 Initialization complete. Launching workers.
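The interleaved messages above come from both ends of the link running inside the same job: the ctrlr.c lines are the SPDK target rejecting an I/O-queue CONNECT that names a controller ID (0x1) it no longer knows, while the nvme_fabric.c/nvme_tcp.c lines are the SPDK host driver polling that CONNECT. The repeated "sct 1, sc 130" appears to decode as status code type 1 (command specific) with code 0x82, the Fabrics CONNECT Invalid Parameters status, so each retry fails until the Keep Alive also fails and the controller is reset and re-attached. A rough sketch of watching the equivalent behaviour from a Linux kernel initiator with nvme-cli, assuming the target address from the log; this is illustrative only, since the test drives the connection through SPDK's userspace host library, not nvme-cli:
    # Connect to the subsystem the log shows, then watch the kernel's own retries.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys                   # shows the controller and its current state
    dmesg | grep -i nvme | tail -n 20  # kernel initiator logs CONNECT failures and reconnects here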
00:30:03.732 Starting thread on core 1
00:30:03.732 Starting thread on core 2
00:30:03.732 Starting thread on core 3
00:30:03.732 Starting thread on core 0
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:03.732
00:30:03.732 real 0m11.524s
00:30:03.732 user 0m21.686s
00:30:03.732 sys 0m3.798s
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:03.732 ************************************
00:30:03.732 END TEST nvmf_target_disconnect_tc2
00:30:03.732 ************************************
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:03.732 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3719268 ']'
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3719268
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3719268 ']'
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3719268
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3719268
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3719268'
killing process with pid 3719268
11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect --
common/autotest_common.sh@973 -- # kill 3719268
00:30:03.732 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3719268
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:03.993 11:44:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:05.904 11:44:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:05.904
00:30:05.904 real 0m21.771s
00:30:05.904 user 0m50.089s
00:30:05.904 sys 0m9.800s
00:30:05.904 11:44:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:05.904 11:44:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:05.904 ************************************
00:30:05.904 END TEST nvmf_target_disconnect
00:30:05.904 ************************************
00:30:06.165 11:44:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:30:06.165
00:30:06.165 real 6m32.049s
00:30:06.165 user 11m20.270s
00:30:06.165 sys 2m13.042s
00:30:06.165 11:44:58 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:06.165 11:44:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:06.165 ************************************
00:30:06.165 END TEST nvmf_host
00:30:06.165 ************************************
00:30:06.165 11:44:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:30:06.165 11:44:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:30:06.165 11:44:58 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:30:06.165 11:44:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:06.165 11:44:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:06.165 11:44:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:06.165 ************************************
00:30:06.165 START TEST nvmf_target_core_interrupt_mode
00:30:06.165 ************************************
00:30:06.165 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:06.165 * Looking for test storage... 00:30:06.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:06.165 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:06.165 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:30:06.165 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:06.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.427 --rc genhtml_branch_coverage=1 00:30:06.427 --rc genhtml_function_coverage=1 00:30:06.427 --rc genhtml_legend=1 00:30:06.427 --rc geninfo_all_blocks=1 00:30:06.427 --rc geninfo_unexecuted_blocks=1 00:30:06.427 00:30:06.427 ' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:06.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.427 --rc genhtml_branch_coverage=1 00:30:06.427 --rc genhtml_function_coverage=1 00:30:06.427 --rc genhtml_legend=1 00:30:06.427 --rc geninfo_all_blocks=1 00:30:06.427 --rc geninfo_unexecuted_blocks=1 00:30:06.427 00:30:06.427 ' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:06.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.427 --rc genhtml_branch_coverage=1 00:30:06.427 --rc genhtml_function_coverage=1 00:30:06.427 --rc genhtml_legend=1 00:30:06.427 --rc geninfo_all_blocks=1 00:30:06.427 --rc geninfo_unexecuted_blocks=1 00:30:06.427 00:30:06.427 ' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:06.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.427 --rc genhtml_branch_coverage=1 00:30:06.427 --rc genhtml_function_coverage=1 00:30:06.427 --rc genhtml_legend=1 00:30:06.427 --rc geninfo_all_blocks=1 00:30:06.427 --rc geninfo_unexecuted_blocks=1 00:30:06.427 00:30:06.427 ' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.427 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.428 ************************************ 00:30:06.428 START TEST nvmf_abort 00:30:06.428 ************************************ 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:06.428 * Looking for test storage... 00:30:06.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:30:06.428 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.689 --rc genhtml_branch_coverage=1 00:30:06.689 --rc genhtml_function_coverage=1 00:30:06.689 --rc genhtml_legend=1 00:30:06.689 --rc geninfo_all_blocks=1 00:30:06.689 --rc geninfo_unexecuted_blocks=1 00:30:06.689 00:30:06.689 ' 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.689 --rc genhtml_branch_coverage=1 00:30:06.689 --rc genhtml_function_coverage=1 00:30:06.689 --rc genhtml_legend=1 00:30:06.689 --rc geninfo_all_blocks=1 00:30:06.689 --rc geninfo_unexecuted_blocks=1 00:30:06.689 00:30:06.689 ' 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.689 --rc genhtml_branch_coverage=1 00:30:06.689 --rc genhtml_function_coverage=1 00:30:06.689 --rc genhtml_legend=1 00:30:06.689 --rc geninfo_all_blocks=1 00:30:06.689 --rc geninfo_unexecuted_blocks=1 00:30:06.689 00:30:06.689 ' 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.689 --rc genhtml_branch_coverage=1 00:30:06.689 --rc genhtml_function_coverage=1 00:30:06.689 --rc genhtml_legend=1 00:30:06.689 --rc geninfo_all_blocks=1 00:30:06.689 --rc geninfo_unexecuted_blocks=1 00:30:06.689 00:30:06.689 ' 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.689 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.690 11:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.690 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.827 11:45:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:14.827 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
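The array setup and matching just above is nvmf/common.sh building its NIC ID tables and walking the PCI bus: the E810 entries are Intel (vendor 0x8086) devices 0x1592/0x159b, and the "Found 0000:31:00.0 (0x8086 - 0x159b)" echo is the first match. A minimal sketch of the same sysfs discovery pattern, assuming a standard Linux sysfs layout; this is illustrative rather than a copy of the script:
    # Walk PCI devices and report E810 ports plus the net interfaces bound to them.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")      # e.g. 0x8086 for Intel
        device=$(cat "$pci/device")      # e.g. 0x159b for an E810 port
        if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net/" 2>/dev/null   # net devices under this port, e.g. cvl_0_0
        fi
    done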
00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:14.827 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:14.827 Found net devices under 0000:31:00.0: cvl_0_0 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.827 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:14.828 Found net devices under 0000:31:00.1: cvl_0_1 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:14.828 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:14.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:14.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms
00:30:14.828
00:30:14.828 --- 10.0.0.2 ping statistics ---
00:30:14.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:14.828 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:14.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:14.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:30:14.828
00:30:14.828 --- 10.0.0.1 ping statistics ---
00:30:14.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:14.828 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- #
nvmfpid=3724866 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3724866 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3724866 ']' 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:14.828 [2024-12-09 11:45:06.169703] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:14.828 [2024-12-09 11:45:06.170738] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:30:14.828 [2024-12-09 11:45:06.170777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.828 [2024-12-09 11:45:06.270663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.828 [2024-12-09 11:45:06.321950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.828 [2024-12-09 11:45:06.322002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.828 [2024-12-09 11:45:06.322018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.828 [2024-12-09 11:45:06.322026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.828 [2024-12-09 11:45:06.322032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.828 [2024-12-09 11:45:06.324069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.828 [2024-12-09 11:45:06.324318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.828 [2024-12-09 11:45:06.324321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.828 [2024-12-09 11:45:06.402575] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:14.828 [2024-12-09 11:45:06.402659] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:14.828 [2024-12-09 11:45:06.403282] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:14.828 [2024-12-09 11:45:06.403570] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.828 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 [2024-12-09 11:45:07.029378] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 Malloc0 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 Delay0 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
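The rpc_cmd calls traced above (abort.sh @17-@25) build the target-side stack for the abort test. Collected as plain rpc.py invocations (a sketch; RPC is shorthand introduced here, and the latency comment reflects the delay bdev's microsecond units):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256    # options exactly as traced
$RPC bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB RAM disk, 4096 B blocks
# Delay bdev: 1,000,000 us (1 s) average and p99 latency on reads and writes,
# so queued I/Os linger long enough for the abort example to cancel them.
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # expose the slow bdev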
00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 [2024-12-09 11:45:07.125318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.089 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:15.089 [2024-12-09 11:45:07.201937] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:17.633 Initializing NVMe Controllers 00:30:17.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:17.633 controller IO queue size 128 less than required 00:30:17.633 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:17.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:17.633 Initialization complete. Launching workers. 
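After the listeners come up (abort.sh @26-@27), @30 launches the abort example against the delayed namespace. The same step by hand (a sketch; the flag comments are my reading of the options as they appear in the trace):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Same arguments as the trace: core mask 0x1, 1 s run, queue depth 128
"$SPDK_DIR/build/examples/abort" \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128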
00:30:17.633 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28566 00:30:17.633 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28623, failed to submit 66 00:30:17.633 success 28566, unsuccessful 57, failed 0 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.633 rmmod nvme_tcp 00:30:17.633 rmmod nvme_fabrics 00:30:17.633 rmmod nvme_keyring 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3724866 ']' 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3724866 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3724866 ']' 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3724866 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724866 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724866' 00:30:17.633 killing process with pid 3724866 
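The counters above balance exactly: every issued I/O received one abort attempt, and the I/Os that completed normally are precisely those whose abort was unsuccessful or never submitted. A quick check:

# total I/Os         = completed + aborted             = 123 + 28566 = 28689
# abort attempts     = submitted + failed-to-submit    = 28623 + 66  = 28689
# submitted aborts   = success + unsuccessful          = 28566 + 57  = 28623
# normal completions = unsuccessful + failed-to-submit = 57 + 66     = 123
echo $((123 + 28566)) $((28623 + 66)) $((28566 + 57)) $((57 + 66))   # 28689 28689 28623 123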
00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3724866 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3724866 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.633 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.547 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.547 00:30:19.547 real 0m13.199s 00:30:19.547 user 0m10.468s 00:30:19.547 sys 0m6.850s 00:30:19.547 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.547 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:19.547 ************************************ 00:30:19.547 END TEST nvmf_abort 00:30:19.547 ************************************ 00:30:19.547 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:19.547 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:19.547 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.547 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:19.810 ************************************ 00:30:19.810 START TEST nvmf_ns_hotplug_stress 00:30:19.810 ************************************ 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:19.810 * Looking for test storage... 
00:30:19.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:19.810 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.811 --rc genhtml_branch_coverage=1 00:30:19.811 --rc genhtml_function_coverage=1 00:30:19.811 --rc genhtml_legend=1 00:30:19.811 --rc geninfo_all_blocks=1 00:30:19.811 --rc geninfo_unexecuted_blocks=1 00:30:19.811 00:30:19.811 ' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.811 --rc genhtml_branch_coverage=1 00:30:19.811 --rc genhtml_function_coverage=1 00:30:19.811 --rc genhtml_legend=1 00:30:19.811 --rc geninfo_all_blocks=1 00:30:19.811 --rc geninfo_unexecuted_blocks=1 00:30:19.811 00:30:19.811 ' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.811 --rc genhtml_branch_coverage=1 00:30:19.811 --rc genhtml_function_coverage=1 00:30:19.811 --rc genhtml_legend=1 00:30:19.811 --rc geninfo_all_blocks=1 00:30:19.811 --rc geninfo_unexecuted_blocks=1 00:30:19.811 00:30:19.811 ' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.811 --rc genhtml_branch_coverage=1 00:30:19.811 --rc genhtml_function_coverage=1 
00:30:19.811 --rc genhtml_legend=1 00:30:19.811 --rc geninfo_all_blocks=1 00:30:19.811 --rc geninfo_unexecuted_blocks=1 00:30:19.811 00:30:19.811 ' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.811 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.073 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.073 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:20.073 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.073 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.217 11:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.217 11:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:28.217 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.217 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:28.218 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.218 
11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:28.218 Found net devices under 0000:31:00.0: cvl_0_0 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:28.218 Found net devices under 0000:31:00.1: cvl_0_1 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.218 11:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:30:28.218 00:30:28.218 --- 10.0.0.2 ping statistics --- 00:30:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.218 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:30:28.218 00:30:28.218 --- 10.0.0.1 ping statistics --- 00:30:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.218 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3730322 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3730322 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3730322 ']' 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.218 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.219 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
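While the second target starts up, note that nvmf_tcp_init has just rebuilt the same two-port topology as before. Collected in one place (a sketch of the commands traced above; both cvl_* interfaces are ports of one physical E810 adapter, and isolating the target port in its own namespace forces real on-the-wire TCP instead of kernel loopback):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the firewall exception so teardown can strip it with a simple grep:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'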
00:30:28.219 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.219 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.219 [2024-12-09 11:45:19.501228] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.219 [2024-12-09 11:45:19.502252] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:30:28.219 [2024-12-09 11:45:19.502293] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.219 [2024-12-09 11:45:19.599837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:28.219 [2024-12-09 11:45:19.647429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.219 [2024-12-09 11:45:19.647482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.219 [2024-12-09 11:45:19.647491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.219 [2024-12-09 11:45:19.647498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.219 [2024-12-09 11:45:19.647504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.219 [2024-12-09 11:45:19.649312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.219 [2024-12-09 11:45:19.649635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.219 [2024-12-09 11:45:19.649636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.219 [2024-12-09 11:45:19.726805] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:28.219 [2024-12-09 11:45:19.726878] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.219 [2024-12-09 11:45:19.727489] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:28.219 [2024-12-09 11:45:19.727790] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
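The repeated "to intr mode from intr mode" notices confirm that every reactor and spdk_thread came up event-driven. One external way to tell the modes apart (a sketch, using the nvmfpid reported above): an idle interrupt-mode target sleeps in epoll and shows near-zero CPU, while a poll-mode target pins each reactor core at roughly 100%.

pid=3730322                      # nvmfpid from the trace above
ps -o pid,%cpu,comm -p "$pid"    # expect %CPU near 0 while the target is idle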
00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:28.219 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:28.479 [2024-12-09 11:45:20.490690] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.479 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:28.739 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.739 [2024-12-09 11:45:20.867417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.739 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:29.000 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:29.260 Malloc0 00:30:29.261 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:29.261 Delay0 00:30:29.521 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.522 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:29.782 NULL1 00:30:29.782 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
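ns_hotplug_stress.sh @27-@36 above assemble the fixture for the stress run. As plain rpc.py calls (a sketch; SPDK_DIR and RPC as in the earlier blocks):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0                  # 32 MiB backing RAM disk
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC bdev_null_create NULL1 1000 512                       # 1000 MiB null bdev, 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1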
00:30:30.042 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3730754 00:30:30.042 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:30.042 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:30.042 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.042 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.302 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:30.302 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:30.562 true 00:30:30.562 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:30.562 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.824 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.824 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:30.824 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:31.085 true 00:30:31.085 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:31.085 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.344 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.344 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:31.344 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:31.603 true 00:30:31.603 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:31.603 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.862 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.123 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:32.123 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:32.123 true 00:30:32.123 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:32.123 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.384 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.649 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:32.649 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:32.649 true 00:30:32.649 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:32.912 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.912 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.171 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:33.171 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:33.431 true 00:30:33.431 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:33.431 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.431 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.692 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:33.692 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:33.952 true 00:30:33.952 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:33.952 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.952 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.213 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:34.213 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:34.473 true 00:30:34.473 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:34.473 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.734 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.734 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:34.734 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:34.994 true 00:30:34.994 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:34.994 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.253 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.253 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:35.253 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:35.512 true 00:30:35.512 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3730754 00:30:35.512 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.771 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.030 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:36.030 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:36.030 true 00:30:36.030 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:36.030 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.289 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.549 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:36.549 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:36.549 true 00:30:36.549 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:36.549 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.810 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.070 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:37.070 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:37.070 true 00:30:37.330 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:37.330 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.330 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.589 11:45:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:37.589 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:37.849 true 00:30:37.849 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:37.850 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.850 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.111 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:38.111 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:38.372 true 00:30:38.372 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:38.372 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.633 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.633 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:38.633 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:38.893 true 00:30:38.893 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:38.893 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.153 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.153 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:39.153 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:39.415 true 00:30:39.415 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:39.415 11:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.675 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.935 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:39.935 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:39.935 true 00:30:39.935 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:39.935 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.196 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.457 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:40.457 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:40.457 true 00:30:40.457 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:40.457 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.719 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.979 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:40.979 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:40.979 true 00:30:40.979 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:40.979 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.239 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.499 11:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:41.499 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:41.499 true 00:30:41.760 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:41.760 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.760 11:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.021 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:42.021 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:42.281 true 00:30:42.281 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:42.281 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.281 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.542 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:42.542 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:42.802 true 00:30:42.802 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:42.802 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.063 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.063 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:43.063 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:43.323 true 00:30:43.323 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:43.323 11:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.583 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.583 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:43.583 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:43.843 true 00:30:43.844 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:43.844 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.105 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.105 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:44.105 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:44.365 true 00:30:44.365 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:44.365 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.626 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.886 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:44.886 11:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:44.886 true 00:30:44.886 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:44.886 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.147 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.408 11:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:45.408 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:45.408 true 00:30:45.408 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:45.408 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.668 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.928 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:45.928 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:46.188 true 00:30:46.188 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:46.188 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.188 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.448 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:46.448 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:46.709 true 00:30:46.709 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:46.709 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.983 11:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.983 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:46.983 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:47.256 true 00:30:47.256 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:47.256 11:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.538 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.538 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:47.538 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:47.829 true 00:30:47.829 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:47.829 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.107 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.107 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:48.107 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:48.384 true 00:30:48.384 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:48.384 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.669 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.669 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:48.669 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:48.984 true 00:30:48.984 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:48.984 11:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.984 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.264 11:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:49.264 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:49.538 true 00:30:49.538 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:49.538 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.538 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.814 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:49.814 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:49.814 true 00:30:50.113 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:50.113 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.113 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.428 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:50.428 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:50.428 true 00:30:50.722 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:50.722 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.722 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.008 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:51.008 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:51.008 true 00:30:51.008 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:51.008 11:45:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.292 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.555 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:51.555 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:51.555 true 00:30:51.555 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:51.555 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.816 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.076 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:52.076 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:52.076 true 00:30:52.076 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:52.076 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.337 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.598 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:52.598 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:52.598 true 00:30:52.860 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:52.860 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.860 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.126 11:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:53.126 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:53.387 true 00:30:53.387 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:53.387 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.387 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.647 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:53.647 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:53.908 true 00:30:53.908 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:53.908 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.167 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.167 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:54.167 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:54.428 true 00:30:54.428 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:54.428 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.689 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.689 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:54.689 11:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:54.950 true 00:30:54.950 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:54.950 11:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.211 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.473 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:55.473 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:55.473 true 00:30:55.473 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:55.473 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.734 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.995 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:55.995 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:55.995 true 00:30:55.995 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:55.995 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.256 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.517 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:56.517 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:56.517 true 00:30:56.778 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:56.778 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.778 11:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.038 11:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:57.038 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:57.299 true 00:30:57.299 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:57.299 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.299 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.559 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:57.559 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:57.818 true 00:30:57.818 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:57.818 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.078 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.078 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:58.078 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:58.339 true 00:30:58.339 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:58.339 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.599 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.599 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:58.599 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:58.860 true 00:30:58.860 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:58.860 11:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.121 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.121 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:30:59.122 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:30:59.382 true 00:30:59.382 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:59.382 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.644 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.904 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:30:59.904 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:30:59.904 true 00:30:59.904 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754 00:30:59.904 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.166 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.166 Initializing NVMe Controllers 00:31:00.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.166 Controller IO queue size 128, less than required. 00:31:00.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:00.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:00.166 Initialization complete. Launching workers. 
00:31:00.166 ========================================================
00:31:00.166 Latency(us)
00:31:00.166 Device Information : IOPS MiB/s Average min max
00:31:00.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29685.24 14.49 4311.89 1478.71 10927.47
00:31:00.166 ========================================================
00:31:00.166 Total : 29685.24 14.49 4311.89 1478.71 10927.47
00:31:00.166
00:31:00.437 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:31:00.437 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:31:00.437 true
00:31:00.437 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3730754
00:31:00.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3730754) - No such process
00:31:00.437 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3730754
00:31:00.437 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:00.698 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:00.959 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:00.959 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:00.959 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:00.959 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.959 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:00.959 null0
00:31:00.959 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.959 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.959 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:01.219 null1
00:31:01.219 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:01.219 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:01.219 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:01.480 null2
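
The repeating @44-@50 entries above are iterations of a single hot-plug loop in ns_hotplug_stress.sh, running while spdk_nvme_perf (started at @40, PID 3730754 in this run) drives randread I/O against the subsystem; the loop exits once kill -0 reports the perf process gone, after which the script waits for perf and strips both namespaces. As a consistency check on the summary table, 29685.24 IOPS x 512 B is about 14.49 MiB/s, matching the MiB/s column. A minimal sketch of the loop, reconstructed from the trace markers (paths, variable names, and exact control flow are assumptions, not the verbatim script); the null0-null2 creations just above belong to the multi-threaded phase sketched at the end of this section:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    # @40: run randread I/O against the target in the background for 30 s
    $perf_bin -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                          # @42 (3730754 in this run)

    null_size=1000
    while kill -0 "$PERF_PID"; do                        # @44: loop while perf is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove ns 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back
        null_size=$((null_size + 1))                     # @49: 1001, 1002, ... as in the trace
        $rpc_py bdev_null_resize NULL1 $null_size        # @50: grow the other namespace's bdev
    done
    wait "$PERF_PID"                                     # @53: reap perf once it exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @54
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2        # @55
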
00:31:01.480 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.480 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.480 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:01.480 null3 00:31:01.480 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.480 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.480 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:01.740 null4 00:31:01.740 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.740 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.740 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:02.001 null5 00:31:02.001 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.001 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.001 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:02.001 null6 00:31:02.001 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.001 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.001 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:02.263 null7 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.263 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
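The add_remove worker whose markers (@14, @16, @17, @18) dominate the rest of this trace can likewise be sketched from the markers alone. The exact function body and the reuse of $rpc_py from the sketch above are assumptions; what the trace does ground is that each worker pins one namespace ID to one null bdev and runs ten add/remove cycles against nqn.2016-06.io.spdk:cnode1.

    # Sketch of the worker traced at ns_hotplug_stress.sh@14-18.
    add_remove() {
        local nsid=$1 bdev=$2                # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do       # @16: ten hotplug iterations per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }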
00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
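The dispatcher markers (@62, @63, @64) interleaved above, together with the @66 wait that appears just below, suggest a launch loop along these lines. Again this is a sketch from the trace, with the pids array name taken from the @64 marker itself and the nsid-to-bdev pairing taken from the @63 calls (add_remove 1 null0 through add_remove 8 null7).

    # Sketch of the dispatcher traced at @62-64 and the join at @66.
    pids=()
    for ((i = 0; i < nthreads; ++i)); do
        add_remove $((i + 1)) "null$i" &   # @63: namespace IDs 1..8 paired with null0..null7
        pids+=($!)                         # @64: remember each background worker's PID
    done
    wait "${pids[@]}"                      # @66: block until all eight workers exit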
00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3737199 3737203 3737204 3737208 3737212 3737215 3737218 3737221 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.264 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.525 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.525 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.525 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.525 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.525 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:02.525 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:02.525 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.525 11:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:02.785 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.785 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.785 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.785 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.785 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.785 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.786 11:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.786 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.047 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.047 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.307 11:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.307 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.307 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.307 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.307 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.308 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.568 11:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.568 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.569 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.569 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.569 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.569 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:31:03.569 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.569 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.569 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.830 11:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.092 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.354 11:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.354 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.355 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.616 11:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.616 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.877 11:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.877 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.877 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.877 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.877 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.877 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.877 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.138 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.399 11:45:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.399 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.400 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.400 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.400 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.400 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.400 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.661 11:45:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.661 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.662 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.923 11:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.923 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.923 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.923 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
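
Note: the interleaved xtrace above is the hot-plug stress loop of ns_hotplug_stress.sh; the @16/@17/@18 tags are the script lines doing the work. A minimal reconstruction is sketched below, assuming (as the trace suggests) eight null bdevs null0..null7 created during setup and one background worker per namespace ID, which is what would produce the out-of-order add/remove interleaving; the real script's job structure is not visible in the trace, so treat this as illustrative. Workspace paths shortened:

    rpc=./scripts/rpc.py                    # full jenkins workspace path in the log
    nqn=nqn.2016-06.io.spdk:cnode1
    for n in {1..8}; do                     # one worker per namespace id (assumption)
        (
            i=0
            while (( i < 10 )); do          # matches the '(( i < 10 ))' trace lines
                $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"  # line 17
                $rpc nvmf_subsystem_remove_ns "$nqn" "$n"                   # line 18
                (( ++i ))                                                   # line 16
            done
        ) &
    done
    wait
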
00:31:05.923 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.184 rmmod nvme_tcp 00:31:06.184 rmmod nvme_fabrics 00:31:06.184 rmmod nvme_keyring 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3730322 ']' 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3730322 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3730322 ']' 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3730322 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3730322 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3730322' 00:31:06.184 killing process with pid 3730322 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3730322 00:31:06.184 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3730322 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.445 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.989 00:31:08.989 real 0m48.839s 00:31:08.989 user 3m2.478s 00:31:08.989 sys 0m21.824s 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:08.989 ************************************ 00:31:08.989 END TEST nvmf_ns_hotplug_stress 00:31:08.989 ************************************ 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:08.989 11:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.989 ************************************ 00:31:08.989 START TEST nvmf_delete_subsystem 00:31:08.989 ************************************ 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:08.989 * Looking for test storage... 00:31:08.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.989 --rc genhtml_branch_coverage=1 00:31:08.989 --rc genhtml_function_coverage=1 00:31:08.989 --rc genhtml_legend=1 00:31:08.989 --rc geninfo_all_blocks=1 00:31:08.989 --rc geninfo_unexecuted_blocks=1 00:31:08.989 00:31:08.989 ' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.989 --rc genhtml_branch_coverage=1 00:31:08.989 --rc genhtml_function_coverage=1 00:31:08.989 --rc genhtml_legend=1 00:31:08.989 --rc geninfo_all_blocks=1 00:31:08.989 --rc geninfo_unexecuted_blocks=1 00:31:08.989 00:31:08.989 ' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.989 --rc genhtml_branch_coverage=1 00:31:08.989 --rc genhtml_function_coverage=1 00:31:08.989 --rc genhtml_legend=1 00:31:08.989 --rc geninfo_all_blocks=1 00:31:08.989 --rc geninfo_unexecuted_blocks=1 00:31:08.989 00:31:08.989 ' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:08.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.989 --rc genhtml_branch_coverage=1 00:31:08.989 --rc genhtml_function_coverage=1 00:31:08.989 --rc 
genhtml_legend=1 00:31:08.989 --rc geninfo_all_blocks=1 00:31:08.989 --rc geninfo_unexecuted_blocks=1 00:31:08.989 00:31:08.989 ' 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.989 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.990 11:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.990 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.129 11:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.129 11:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:17.129 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:17.129 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.129 11:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:17.129 Found net devices under 0000:31:00.0: cvl_0_0 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.129 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:17.130 Found net devices under 0000:31:00.1: cvl_0_1 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.130 11:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:31:17.130 00:31:17.130 --- 10.0.0.2 ping statistics --- 00:31:17.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.130 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:31:17.130 00:31:17.130 --- 10.0.0.1 ping statistics --- 00:31:17.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.130 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3742250 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3742250 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3742250 ']' 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
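
Note: collected from the nvmf_tcp_init trace above (nvmf/common.sh@250-291), the test topology is: the first E810 port, cvl_0_0, is moved into a private network namespace and addressed as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; a first-position iptables rule admits TCP port 4420, and one ping in each direction verifies the link before the target is launched inside the namespace. A condensed replay of the traced commands (workspace paths shortened):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side E810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # admit NVMe/TCP traffic; the comment tags the rule so cleanup can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                           # root ns  -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> initiator
    # the target app itself then runs inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
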
00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.130 11:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.130 [2024-12-09 11:46:08.316155] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.130 [2024-12-09 11:46:08.317310] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:31:17.130 [2024-12-09 11:46:08.317364] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.130 [2024-12-09 11:46:08.402612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.130 [2024-12-09 11:46:08.443675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.130 [2024-12-09 11:46:08.443710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.130 [2024-12-09 11:46:08.443718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.130 [2024-12-09 11:46:08.443725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.130 [2024-12-09 11:46:08.443731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.130 [2024-12-09 11:46:08.444985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.130 [2024-12-09 11:46:08.444988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.130 [2024-12-09 11:46:08.502082] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.130 [2024-12-09 11:46:08.502660] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.130 [2024-12-09 11:46:08.502972] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
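
Note: the notices above are the visible effect of --interrupt-mode: both reactors (cores 0 and 1) start event-driven instead of busy-polling, and each spdk_thread (app_thread plus the two nvmf_tgt poll groups) is switched to interrupt mode. The flag is injected by build_nvmf_app_args, traced earlier (nvmf/common.sh@29-34); condensed below, with the guard variable name a placeholder, since the trace only shows the already-expanded test '[' 1 -eq 1 ']':

    NVMF_APP=(./build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask (common.sh@29)
    if [[ ${interrupt_mode:-0} -eq 1 ]]; then     # variable name hypothetical
        NVMF_APP+=(--interrupt-mode)              # common.sh@34
    fi
    # once the namespace exists, the netns wrapper is prepended (common.sh@293):
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
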
00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.130 [2024-12-09 11:46:09.145683] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.130 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.131 [2024-12-09 11:46:09.173935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.131 NULL1 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.131 11:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.131 Delay0 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3742527 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:17.131 11:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:17.131 [2024-12-09 11:46:09.271062] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
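
Note: with the target listening, delete_subsystem.sh builds a deliberately slow namespace and then deletes the subsystem while I/O is in flight: a 1000 MiB, 512 B-block null bdev is wrapped in a delay bdev that adds 1000000 us (one second) of latency per operation class, perf is started against it, and nvmf_delete_subsystem is issued two seconds in. The flood of 'completed with error (sct=0, sc=8)' completions and the 'starting I/O failed: -6' lines below are the expected outcome of tearing the subsystem down under load, not a test failure. The traced RPCs, collected into one sequence (paths shortened):

    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # +1 s latency per op class
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    # cores 2-3, qd 128, 70% reads, 512 B I/O, 5 s run
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2                                           # delete_subsystem.sh@30
    $rpc nvmf_delete_subsystem "$nqn"                 # delete_subsystem.sh@32
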
00:31:19.676 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.676 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.676 11:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[Several hundred repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers from the in-flight perf I/O are condensed here; the distinct nvme_tcp errors interleaved with them were:]
[2024-12-09 11:46:11.321784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e3f00 is same with the state(6) to be set
[2024-12-09 11:46:11.324857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f68a000d350 is same with the state(6) to be set
[2024-12-09 11:46:12.286844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(6) to be set
[2024-12-09 11:46:12.325153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e40e0 is same with the state(6) to be set
[2024-12-09 11:46:12.325937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e44a0 is same with the state(6) to be set
[2024-12-09 11:46:12.327332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f68a000d020 is same with the state(6) to be set
[2024-12-09 11:46:12.327674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f68a000d680 is same with
the state(6) to be set 00:31:20.251 Initializing NVMe Controllers 00:31:20.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.251 Controller IO queue size 128, less than required. 00:31:20.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:20.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:20.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:20.251 Initialization complete. Launching workers. 00:31:20.251 ======================================================== 00:31:20.251 Latency(us) 00:31:20.251 Device Information : IOPS MiB/s Average min max 00:31:20.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.75 0.08 890441.33 257.12 1008026.97 00:31:20.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.79 0.08 909413.76 275.09 1009789.65 00:31:20.251 ======================================================== 00:31:20.251 Total : 334.53 0.16 899673.45 257.12 1009789.65 00:31:20.251 00:31:20.251 [2024-12-09 11:46:12.328258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e55f0 (9): Bad file descriptor 00:31:20.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:20.251 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.251 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:20.251 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3742527 00:31:20.251 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3742527 00:31:20.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3742527) - No such process 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3742527 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3742527 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3742527 00:31:20.823 11:46:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.823 [2024-12-09 11:46:12.862262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3743196 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:20.823 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:20.823 [2024-12-09 11:46:12.933403] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:31:21.396 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:21.396 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:21.396 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:21.969 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:21.969 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:21.969 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:22.540 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:22.540 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:22.540 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:22.800 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:22.800 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:22.800 11:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.370 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.370 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:23.370 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.941 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.941 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:23.941 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.941 Initializing NVMe Controllers 00:31:23.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:23.941 Controller IO queue size 128, less than required. 00:31:23.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:23.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:23.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:23.941 Initialization complete. Launching workers. 
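[Annotation — the delete_subsystem.sh@56-60 records above are a bounded poll loop; a sketch of the pattern only, with $perf_pid standing in for the traced pid 3743196:]
  delay=0
  # kill -0 sends no signal; it only tests whether the pid still exists
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1   # give up if perf outlives ~10 s of 0.5 s polls
      sleep 0.5
  done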
00:31:23.941 ======================================================== 00:31:23.941 Latency(us) 00:31:23.941 Device Information : IOPS MiB/s Average min max 00:31:23.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002165.87 1000242.83 1005500.19 00:31:23.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003955.79 1000371.59 1010211.32 00:31:23.941 ======================================================== 00:31:23.941 Total : 256.00 0.12 1003060.83 1000242.83 1010211.32 00:31:23.941 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743196 00:31:24.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3743196) - No such process 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3743196 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.512 rmmod nvme_tcp 00:31:24.512 rmmod nvme_fabrics 00:31:24.512 rmmod nvme_keyring 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3742250 ']' 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3742250 00:31:24.512 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3742250 ']' 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3742250 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3742250 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3742250' 00:31:24.513 killing process with pid 3742250 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3742250 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3742250 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:24.513 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:24.772 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:24.772 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:24.773 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.773 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.773 11:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.685 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.685 00:31:26.685 real 0m18.089s 00:31:26.685 user 0m26.199s 00:31:26.685 sys 0m7.236s 00:31:26.685 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.685 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.685 ************************************ 00:31:26.685 END TEST nvmf_delete_subsystem 00:31:26.685 ************************************ 00:31:26.685 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:26.685 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:26.685 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.685 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:26.685 ************************************ 00:31:26.685 START TEST nvmf_host_management 00:31:26.685 ************************************ 00:31:26.686 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:26.948 * Looking for test storage... 00:31:26.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:26.948 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:26.948 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:31:26.948 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.948 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:26.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.948 --rc genhtml_branch_coverage=1 00:31:26.948 --rc genhtml_function_coverage=1 00:31:26.948 --rc genhtml_legend=1 00:31:26.948 --rc geninfo_all_blocks=1 00:31:26.948 --rc geninfo_unexecuted_blocks=1 00:31:26.948 00:31:26.948 ' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:26.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.949 --rc genhtml_branch_coverage=1 00:31:26.949 --rc genhtml_function_coverage=1 00:31:26.949 --rc genhtml_legend=1 00:31:26.949 --rc geninfo_all_blocks=1 00:31:26.949 --rc geninfo_unexecuted_blocks=1 00:31:26.949 00:31:26.949 ' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:26.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.949 --rc genhtml_branch_coverage=1 00:31:26.949 --rc genhtml_function_coverage=1 00:31:26.949 --rc genhtml_legend=1 00:31:26.949 --rc geninfo_all_blocks=1 00:31:26.949 --rc geninfo_unexecuted_blocks=1 00:31:26.949 00:31:26.949 ' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:26.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.949 --rc genhtml_branch_coverage=1 00:31:26.949 --rc genhtml_function_coverage=1 00:31:26.949 --rc genhtml_legend=1 
00:31:26.949 --rc geninfo_all_blocks=1 00:31:26.949 --rc geninfo_unexecuted_blocks=1 00:31:26.949 00:31:26.949 ' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.949 11:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:26.949 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.097 11:46:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.097 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:35.098 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:35.098 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
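[Annotation — the nvmf/common.sh@410-428 records above resolve each detected e810 PCI function to its kernel interface through sysfs; the same mapping can be reproduced with:]
  pci=0000:31:00.0                      # BDF reported by the scan above
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"   # -> cvl_0_0
  done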
00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:35.098 Found net devices under 0000:31:00.0: cvl_0_0 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:35.098 Found net devices under 0000:31:00.1: cvl_0_1 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.098 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:31:35.099 00:31:35.099 --- 10.0.0.2 ping statistics --- 00:31:35.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.099 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:31:35.099 00:31:35.099 --- 10.0.0.1 ping statistics --- 00:31:35.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.099 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3748192 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3748192 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3748192 ']' 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:35.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.099 11:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.099 [2024-12-09 11:46:26.650583] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:35.099 [2024-12-09 11:46:26.651727] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:31:35.099 [2024-12-09 11:46:26.651777] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.099 [2024-12-09 11:46:26.754001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.099 [2024-12-09 11:46:26.806468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.099 [2024-12-09 11:46:26.806524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.099 [2024-12-09 11:46:26.806533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.099 [2024-12-09 11:46:26.806540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.100 [2024-12-09 11:46:26.806547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.100 [2024-12-09 11:46:26.808604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.100 [2024-12-09 11:46:26.808767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.100 [2024-12-09 11:46:26.808933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:35.100 [2024-12-09 11:46:26.808933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.100 [2024-12-09 11:46:26.886659] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:35.100 [2024-12-09 11:46:26.887347] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:35.100 [2024-12-09 11:46:26.888052] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:35.100 [2024-12-09 11:46:26.888233] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:35.100 [2024-12-09 11:46:26.888402] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
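Annotation: before the target start above, nvmf_tcp_init (nvmf/common.sh@250 onward) set up the per-test network. The traced sequence reduces to a handful of ip/iptables commands: flush both e810 ports, move the target port into a private namespace, assign the 10.0.0.x address pair, bring the links up, open TCP port 4420, and verify reachability in both directions. A minimal standalone sketch, assuming the same cvl_0_0/cvl_0_1 interface names the driver created in this run:

    #!/usr/bin/env bash
    # Sketch of the network bring-up traced above; assumes the e810-backed
    # interfaces cvl_0_0 (target side) and cvl_0_1 (initiator side) exist.
    set -e
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0          # start from clean interfaces
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"                # isolate the target port in its own netns
    ip link set cvl_0_0 netns "$NS"

    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # accept NVMe/TCP traffic arriving on the initiator-facing port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

The real helper additionally tags its iptables rule with an SPDK_NVMF comment (common.sh@790) so the cleanup pass at the end of the test can strip it again with iptables-save | grep -v SPDK_NVMF | iptables-restore.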
00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.361 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.361 [2024-12-09 11:46:27.501934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.623 Malloc0 00:31:35.623 [2024-12-09 11:46:27.590213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3748326 00:31:35.623 11:46:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3748326 /var/tmp/bdevperf.sock 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3748326 ']' 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.623 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:35.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.624 { 00:31:35.624 "params": { 00:31:35.624 "name": "Nvme$subsystem", 00:31:35.624 "trtype": "$TEST_TRANSPORT", 00:31:35.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.624 "adrfam": "ipv4", 00:31:35.624 "trsvcid": "$NVMF_PORT", 00:31:35.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.624 "hdgst": ${hdgst:-false}, 00:31:35.624 "ddgst": ${ddgst:-false} 00:31:35.624 }, 00:31:35.624 "method": "bdev_nvme_attach_controller" 00:31:35.624 } 00:31:35.624 EOF 00:31:35.624 )") 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
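Annotation: the --json /dev/fd/63 argument above is the output of gen_nvmf_target_json, which expands one bdev_nvme_attach_controller entry per subsystem index (the heredoc is visible in the trace at common.sh@582) and pipes the result through jq for validation and pretty-printing. A reduced sketch producing the same object; the heredoc is compacted into a printf here, and the final pipe into jq is an assumption since the exact plumbing is elided in the trace:

    # Reduced sketch of gen_nvmf_target_json; assumes TEST_TRANSPORT,
    # NVMF_FIRST_TARGET_IP and NVMF_PORT are set (tcp / 10.0.0.2 / 4420 here).
    gen_nvmf_target_json() {
        local s
        for s in "${@:-1}"; do
            # one attach-controller entry per subsystem index
            printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}\n' \
                "$s" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
                "$s" "$s" "${hdgst:-false}" "${ddgst:-false}"
        done | jq .   # jq validates and pretty-prints, as in the trace
    }

bdevperf then reads this on a /dev/fd path, presumably via bash process substitution, e.g. bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10, which is where the /dev/fd/63 in the command line above comes from.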
00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:35.624 11:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:35.624 "params": { 00:31:35.624 "name": "Nvme0", 00:31:35.624 "trtype": "tcp", 00:31:35.624 "traddr": "10.0.0.2", 00:31:35.624 "adrfam": "ipv4", 00:31:35.624 "trsvcid": "4420", 00:31:35.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.624 "hdgst": false, 00:31:35.624 "ddgst": false 00:31:35.624 }, 00:31:35.624 "method": "bdev_nvme_attach_controller" 00:31:35.624 }' 00:31:35.624 [2024-12-09 11:46:27.702377] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:31:35.624 [2024-12-09 11:46:27.702430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748326 ] 00:31:35.624 [2024-12-09 11:46:27.774663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.885 [2024-12-09 11:46:27.811229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.145 Running I/O for 10 seconds... 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=755 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 755 -ge 100 ']' 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.408 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.408 [2024-12-09 11:46:28.561596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.561705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22290e0 is same with the state(6) to be set 
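Annotation: waitforio (host_management.sh@52-64, traced above) gates the test on real traffic. Starting from i=10 it queries bdev_get_iostat on the bdevperf RPC socket, extracts num_read_ops with jq, and breaks once at least 100 reads have completed (755 on the first poll here). An equivalent standalone loop, assuming SPDK's scripts/rpc.py as the rpc_cmd backend; the one-second sleep is an assumption, since no poll interval is visible in the trace:

    # Standalone version of the waitforio polling loop traced above.
    wait_for_reads() {
        local sock=$1 bdev=$2 want=${3:-100} i ops
        for ((i = 10; i != 0; i--)); do
            ops=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                  jq -r '.bdevs[0].num_read_ops')
            [ "$ops" -ge "$want" ] && return 0   # enough I/O observed
            sleep 1                              # assumed poll interval
        done
        return 1
    }

    wait_for_reads /var/tmp/bdevperf.sock Nvme0n1 100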
00:31:36.408 [2024-12-09 11:46:28.563370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.408 [2024-12-09 11:46:28.563408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.563419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.408 [2024-12-09 11:46:28.563427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.563440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.408 [2024-12-09 11:46:28.563448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.563456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.408 [2024-12-09 11:46:28.563464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.563471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180eb10 is same with the state(6) to be set 00:31:36.408 [2024-12-09 11:46:28.564017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 
11:46:28.564124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.408 [2024-12-09 11:46:28.564175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.408 [2024-12-09 11:46:28.564184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 
11:46:28.564297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 
11:46:28.564468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 
11:46:28.564637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.409 [2024-12-09 11:46:28.564790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.409 [2024-12-09 11:46:28.564799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 
11:46:28.564807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 
11:46:28.564978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.564988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.564995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.565004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.565017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.565027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.565034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.565044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.565051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.565062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.565079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.565087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.565096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.565103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.565112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.410 [2024-12-09 11:46:28.565120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.410 [2024-12-09 11:46:28.566375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:36.410 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.410 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:36.671 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.671 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:36.671 task offset: 110336 on job bdev=Nvme0n1 fails
00:31:36.671
00:31:36.671 Latency(us)
00:31:36.671 [2024-12-09T10:46:28.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:36.671 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:36.671 Job: Nvme0n1 ended in about 0.49 seconds with error
00:31:36.671 Verification LBA range: start 0x0 length 0x400
00:31:36.671 Nvme0n1 : 0.49 1707.91 106.74 131.22 0.00 33835.81 1563.31 34515.63
00:31:36.671 [2024-12-09T10:46:28.833Z] ===================================================================================================================
00:31:36.671 [2024-12-09T10:46:28.833Z] Total : 1707.91 106.74 131.22 0.00 33835.81 1563.31 34515.63
00:31:36.671 [2024-12-09 11:46:28.568375] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:36.671 [2024-12-09 11:46:28.568396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180eb10 (9): Bad file descriptor
00:31:36.671 [2024-12-09 11:46:28.569615] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:31:36.671 [2024-12-09 11:46:28.569693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:31:36.671 [2024-12-09 11:46:28.569713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:36.671 [2024-12-09 11:46:28.569728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:31:36.671 [2024-12-09 11:46:28.569736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:31:36.671 [2024-12-09 11:46:28.569743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.671 [2024-12-09 11:46:28.569751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x180eb10
00:31:36.671 [2024-12-09 11:46:28.569775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180eb10 (9): Bad file descriptor
00:31:36.671 [2024-12-09 11:46:28.569788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:31:36.671 [2024-12-09 11:46:28.569795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:31:36.671 [2024-12-09 11:46:28.569805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:31:36.671 [2024-12-09 11:46:28.569813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
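Annotation: the failure above is the point of the test. While bdevperf is mid-I/O, host_management.sh@84 revokes the initiator's access with nvmf_subsystem_remove_host, so the outstanding queue is torn down (the long ABORTED - SQ DELETION dump above) and the reconnect attempt is refused by nvmf_qpair_access_allowed with "does not allow host", before @85 re-admits it. The equivalent RPC pair, assuming SPDK's scripts/rpc.py against the target's default /var/tmp/spdk.sock (the socket the earlier waitforlisten polled):

    # Revoke and restore host access on the subsystem, as exercised above.
    SUBSYS=nqn.2016-06.io.spdk:cnode0
    HOST=nqn.2016-06.io.spdk:host0

    # revoke access mid-I/O: in-flight commands complete as ABORTED and the
    # initiator's reconnect is rejected by the target's allow-list check
    rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOST"

    sleep 1   # give the initiator time to observe the failed reconnect

    # re-admit the host so a fresh connection can succeed
    rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOST"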
00:31:36.671 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.671 11:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3748326 00:31:37.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3748326) - No such process 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:37.612 { 00:31:37.612 "params": { 00:31:37.612 "name": "Nvme$subsystem", 00:31:37.612 "trtype": "$TEST_TRANSPORT", 00:31:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.612 "adrfam": "ipv4", 00:31:37.612 "trsvcid": "$NVMF_PORT", 00:31:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.612 "hdgst": ${hdgst:-false}, 00:31:37.612 "ddgst": ${ddgst:-false} 00:31:37.612 }, 00:31:37.612 "method": "bdev_nvme_attach_controller" 00:31:37.612 } 00:31:37.612 EOF 00:31:37.612 )") 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:37.612 11:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:37.612 "params": { 00:31:37.612 "name": "Nvme0", 00:31:37.612 "trtype": "tcp", 00:31:37.612 "traddr": "10.0.0.2", 00:31:37.612 "adrfam": "ipv4", 00:31:37.612 "trsvcid": "4420", 00:31:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.612 "hdgst": false, 00:31:37.612 "ddgst": false 00:31:37.612 }, 00:31:37.612 "method": "bdev_nvme_attach_controller" 00:31:37.612 }' 00:31:37.612 [2024-12-09 11:46:29.637565] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:31:37.612 [2024-12-09 11:46:29.637621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748693 ] 00:31:37.612 [2024-12-09 11:46:29.708833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.612 [2024-12-09 11:46:29.744174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.873 Running I/O for 1 seconds... 00:31:38.814 1408.00 IOPS, 88.00 MiB/s 00:31:38.814 Latency(us) 00:31:38.814 [2024-12-09T10:46:30.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.814 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.814 Verification LBA range: start 0x0 length 0x400 00:31:38.814 Nvme0n1 : 1.02 1442.57 90.16 0.00 0.00 43633.49 10977.28 35826.35 00:31:38.814 [2024-12-09T10:46:30.976Z] =================================================================================================================== 00:31:38.814 [2024-12-09T10:46:30.976Z] Total : 1442.57 90.16 0.00 0.00 43633.49 10977.28 35826.35 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.074 rmmod nvme_tcp 00:31:39.074 rmmod nvme_fabrics 00:31:39.074 rmmod nvme_keyring 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3748192 ']' 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3748192 00:31:39.074 11:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3748192 ']' 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3748192 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3748192 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3748192' 00:31:39.074 killing process with pid 3748192 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3748192 00:31:39.074 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3748192 00:31:39.334 [2024-12-09 11:46:31.303138] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.334 11:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.245 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:41.506 00:31:41.506 real 0m14.584s 00:31:41.506 user 
0m18.792s 00:31:41.506 sys 0m7.481s 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:41.506 ************************************ 00:31:41.506 END TEST nvmf_host_management 00:31:41.506 ************************************ 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:41.506 ************************************ 00:31:41.506 START TEST nvmf_lvol 00:31:41.506 ************************************ 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:41.506 * Looking for test storage... 00:31:41.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:31:41.506 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
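Everything nvmf_lvol does next goes through rpc.py against the target it is about to start; with the xtrace noise stripped, the RPC chain recorded further below reduces to the sketch here. The create calls print the new object's UUID on stdout, so it is captured rather than hard-coded, and $rpc stands in for the full workspace path to scripts/rpc.py:

rpc="$SPDK_DIR/scripts/rpc.py"   # $SPDK_DIR is a stand-in for the long workspace path
$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore on top of the raid
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # size 20 = LVOL_BDEV_INIT_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# spdk_nvme_perf then starts randwrite traffic, and the rest happens under load:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                      # grow to LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                       # detach the clone from its snapshot

The point of resizing, cloning, and inflating while perf is still writing is to exercise the lvol metadata paths concurrently with I/O, which is exactly what the wait on the perf pid later in the trace brackets.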
00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.767 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.768 --rc genhtml_branch_coverage=1 00:31:41.768 --rc genhtml_function_coverage=1 00:31:41.768 --rc genhtml_legend=1 00:31:41.768 --rc geninfo_all_blocks=1 00:31:41.768 --rc geninfo_unexecuted_blocks=1 00:31:41.768 00:31:41.768 ' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.768 --rc genhtml_branch_coverage=1 00:31:41.768 --rc genhtml_function_coverage=1 00:31:41.768 --rc genhtml_legend=1 00:31:41.768 --rc geninfo_all_blocks=1 00:31:41.768 --rc geninfo_unexecuted_blocks=1 00:31:41.768 00:31:41.768 ' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.768 --rc genhtml_branch_coverage=1 00:31:41.768 --rc genhtml_function_coverage=1 00:31:41.768 --rc genhtml_legend=1 00:31:41.768 --rc geninfo_all_blocks=1 00:31:41.768 --rc geninfo_unexecuted_blocks=1 00:31:41.768 00:31:41.768 ' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.768 --rc genhtml_branch_coverage=1 00:31:41.768 --rc genhtml_function_coverage=1 
00:31:41.768 --rc genhtml_legend=1 00:31:41.768 --rc geninfo_all_blocks=1 00:31:41.768 --rc geninfo_unexecuted_blocks=1 00:31:41.768 00:31:41.768 ' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.768 11:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.768 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.906 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.906 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.906 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.906 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.906 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:49.907 11:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:49.907 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:49.907 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:49.907 Found net devices under 0000:31:00.0: cvl_0_0 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:49.907 Found net devices under 0000:31:00.1: cvl_0_1 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.907 
11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.907 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:31:49.907 00:31:49.907 --- 10.0.0.2 ping statistics --- 00:31:49.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.907 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:31:49.907 00:31:49.907 --- 10.0.0.1 ping statistics --- 00:31:49.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.907 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:49.907 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3753398 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3753398 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3753398 ']' 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.908 11:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.908 [2024-12-09 11:46:41.239682] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:49.908 [2024-12-09 11:46:41.240723] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:31:49.908 [2024-12-09 11:46:41.240763] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.908 [2024-12-09 11:46:41.321415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:49.908 [2024-12-09 11:46:41.358130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.908 [2024-12-09 11:46:41.358167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.908 [2024-12-09 11:46:41.358175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.908 [2024-12-09 11:46:41.358181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.908 [2024-12-09 11:46:41.358187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.908 [2024-12-09 11:46:41.359740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.908 [2024-12-09 11:46:41.359854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.908 [2024-12-09 11:46:41.359857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.908 [2024-12-09 11:46:41.415713] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.908 [2024-12-09 11:46:41.416337] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:49.908 [2024-12-09 11:46:41.416583] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:49.908 [2024-12-09 11:46:41.416826] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
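The startup just traced is the interrupt-mode variant of the usual flow: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace set up earlier, with -m 0x7 giving three reactors and --interrupt-mode making them block on events instead of busy-polling, and each poll-group thread reports "intr mode" before any traffic flows. Condensed, with the readiness loop below standing in for waitforlisten (an assumption about its behavior, not a copy of the helper) and $SPDK_DIR standing in for the workspace path:

ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!
# poll the default RPC socket until the app answers, bailing if it dies early
until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
# next step in the trace: create the TCP transport with 8192-byte in-capsule data
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192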
00:31:49.908 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.908 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:49.908 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.908 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.908 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:50.169 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.169 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:50.169 [2024-12-09 11:46:42.252411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.169 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.430 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:50.430 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.690 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:50.690 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:50.950 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:50.950 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=964b1ab5-46c2-4613-b99a-05017866ff74 00:31:50.950 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 964b1ab5-46c2-4613-b99a-05017866ff74 lvol 20 00:31:51.211 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8aa73615-7eb9-42fe-b723-533f36b85c7e 00:31:51.211 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:51.470 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8aa73615-7eb9-42fe-b723-533f36b85c7e 00:31:51.471 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:51.731 [2024-12-09 11:46:43.712530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:51.731 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:51.992 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:51.992 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3753779 00:31:51.992 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:52.934 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8aa73615-7eb9-42fe-b723-533f36b85c7e MY_SNAPSHOT 00:31:53.194 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c321bbe1-a29b-4f6f-8d6e-c3d6eb49a8b4 00:31:53.194 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8aa73615-7eb9-42fe-b723-533f36b85c7e 30 00:31:53.454 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c321bbe1-a29b-4f6f-8d6e-c3d6eb49a8b4 MY_CLONE 00:31:53.454 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7f76ef71-a284-4450-8901-64ef0c2b0d98 00:31:53.454 11:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7f76ef71-a284-4450-8901-64ef0c2b0d98 00:31:54.025 11:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3753779 00:32:02.158 Initializing NVMe Controllers 00:32:02.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:02.158 Controller IO queue size 128, less than required. 00:32:02.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:02.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:02.158 Initialization complete. Launching workers. 
00:32:02.158 ======================================================== 00:32:02.158 Latency(us) 00:32:02.158 Device Information : IOPS MiB/s Average min max 00:32:02.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12348.20 48.24 10368.75 1557.33 40338.37 00:32:02.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15221.00 59.46 8410.86 773.45 94942.82 00:32:02.158 ======================================================== 00:32:02.158 Total : 27569.20 107.69 9287.80 773.45 94942.82 00:32:02.158 00:32:02.158 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:02.418 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8aa73615-7eb9-42fe-b723-533f36b85c7e 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 964b1ab5-46c2-4613-b99a-05017866ff74 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.678 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.678 rmmod nvme_tcp 00:32:02.678 rmmod nvme_fabrics 00:32:02.939 rmmod nvme_keyring 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3753398 ']' 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3753398 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3753398 ']' 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3753398 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3753398 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3753398' 00:32:02.939 killing process with pid 3753398 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3753398 00:32:02.939 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3753398 00:32:02.939 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.939 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.939 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.939 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:02.939 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:02.939 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.939 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:03.200 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.200 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.200 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.200 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.200 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.114 00:32:05.114 real 0m23.689s 00:32:05.114 user 0m55.530s 00:32:05.114 sys 0m10.605s 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:05.114 ************************************ 00:32:05.114 END TEST nvmf_lvol 00:32:05.114 ************************************ 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.114 ************************************ 00:32:05.114 START TEST nvmf_lvs_grow 00:32:05.114 
************************************ 00:32:05.114 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:05.375 * Looking for test storage... 00:32:05.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.375 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.376 --rc genhtml_branch_coverage=1 00:32:05.376 --rc genhtml_function_coverage=1 00:32:05.376 --rc genhtml_legend=1 00:32:05.376 --rc geninfo_all_blocks=1 00:32:05.376 --rc geninfo_unexecuted_blocks=1 00:32:05.376 00:32:05.376 ' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.376 --rc genhtml_branch_coverage=1 00:32:05.376 --rc genhtml_function_coverage=1 00:32:05.376 --rc genhtml_legend=1 00:32:05.376 --rc geninfo_all_blocks=1 00:32:05.376 --rc geninfo_unexecuted_blocks=1 00:32:05.376 00:32:05.376 ' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.376 --rc genhtml_branch_coverage=1 00:32:05.376 --rc genhtml_function_coverage=1 00:32:05.376 --rc genhtml_legend=1 00:32:05.376 --rc geninfo_all_blocks=1 00:32:05.376 --rc geninfo_unexecuted_blocks=1 00:32:05.376 00:32:05.376 ' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.376 --rc genhtml_branch_coverage=1 00:32:05.376 --rc genhtml_function_coverage=1 00:32:05.376 --rc genhtml_legend=1 00:32:05.376 --rc geninfo_all_blocks=1 00:32:05.376 --rc geninfo_unexecuted_blocks=1 00:32:05.376 00:32:05.376 ' 00:32:05.376 11:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
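The nvmf/common.sh entries just above and just below assemble the nvmf_tgt command line incrementally before launch. A condensed sketch of that pattern; the binary path and the interrupt_mode guard variable are illustrative assumptions, not verbatim SPDK source:

  NVMF_APP_SHM_ID=${NVMF_APP_SHM_ID:-0}
  NO_HUGE=()                                           # left empty unless hugepages are disabled
  interrupt_mode=1                                     # assumed guard; common.sh tests '[' 1 -eq 1 ']' below
  NVMF_APP=(/path/to/build/bin/nvmf_tgt)               # hypothetical path
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)          # shm id + tracepoint group mask
  NVMF_APP+=("${NO_HUGE[@]}")
  (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode) # the append at nvmf/common.sh@34 below
  "${NVMF_APP[@]}" &                                   # launch the target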
00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:05.376 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.377 11:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:13.521 11:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
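The loop declared at the end of this entry drives the NIC discovery whose per-device output follows. A condensed sketch of the logic in nvmf/common.sh as traced here: known Intel e810 device IDs are pulled from a pci_bus_cache map (assumed to have been populated earlier from lspci-style data) and each PCI address is resolved to its kernel interface name through sysfs:

  declare -A pci_bus_cache      # assumed: "vendor:device" -> space-separated PCI addresses
  intel=0x8086
  e810=() net_devs=()
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})             # both ports found below report 0x159b
  pci_devs=("${e810[@]}")
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
      net_devs+=("${pci_net_devs[@]##*/}")              # strip the path, keep the ifname
  done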
00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:13.521 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:13.521 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:13.521 Found net devices under 0000:31:00.0: cvl_0_0 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:13.521 Found net devices under 0000:31:00.1: cvl_0_1 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:13.521 11:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.521 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:13.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:32:13.522 00:32:13.522 --- 10.0.0.2 ping statistics --- 00:32:13.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.522 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:13.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:32:13.522 00:32:13.522 --- 10.0.0.1 ping statistics --- 00:32:13.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.522 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3760175 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3760175 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3760175 ']' 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.522 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.522 [2024-12-09 11:47:04.981983] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
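A recap of the two-namespace TCP topology nvmf_tcp_init built above: the first e810 port (cvl_0_0) is moved into a private namespace and addressed as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of the same setup (the comment tagging added by the ipts wrapper is omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator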
00:32:13.522 [2024-12-09 11:47:04.982974] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:13.522 [2024-12-09 11:47:04.983015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.522 [2024-12-09 11:47:05.061198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.522 [2024-12-09 11:47:05.096122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.522 [2024-12-09 11:47:05.096155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.522 [2024-12-09 11:47:05.096164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.522 [2024-12-09 11:47:05.096171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.522 [2024-12-09 11:47:05.096177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.522 [2024-12-09 11:47:05.096719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.522 [2024-12-09 11:47:05.152500] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:13.522 [2024-12-09 11:47:05.152740] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:13.783 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.783 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:13.783 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:13.783 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:13.783 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.783 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.783 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:14.044 [2024-12-09 11:47:06.073221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.044 ************************************ 00:32:14.044 START TEST lvs_grow_clean 00:32:14.044 ************************************ 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.044 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:14.305 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:14.305 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:14.566 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:14.566 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:14.566 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:14.826 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:14.826 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:14.826 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eff387b5-bddf-42bc-ad0c-fa78f841296b lvol 150 00:32:14.826 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e2023f29-7229-4285-b7ef-7b152f9a4347 00:32:14.826 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.826 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:15.086 [2024-12-09 11:47:07.093220] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:15.086 [2024-12-09 11:47:07.093393] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:15.086 true 00:32:15.086 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:15.086 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:15.347 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:15.347 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:15.347 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2023f29-7229-4285-b7ef-7b152f9a4347 00:32:15.608 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.870 [2024-12-09 11:47:07.777835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3760881 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3760881 /var/tmp/bdevperf.sock 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3760881 ']' 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:15.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:15.870 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:15.870 [2024-12-09 11:47:08.009606] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:15.870 [2024-12-09 11:47:08.009678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760881 ] 00:32:16.131 [2024-12-09 11:47:08.103512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.131 [2024-12-09 11:47:08.155489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.703 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.703 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:16.703 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:16.963 Nvme0n1 00:32:16.963 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:17.225 [ 00:32:17.225 { 00:32:17.225 "name": "Nvme0n1", 00:32:17.225 "aliases": [ 00:32:17.225 "e2023f29-7229-4285-b7ef-7b152f9a4347" 00:32:17.225 ], 00:32:17.225 "product_name": "NVMe disk", 00:32:17.225 "block_size": 4096, 00:32:17.225 "num_blocks": 38912, 00:32:17.225 "uuid": "e2023f29-7229-4285-b7ef-7b152f9a4347", 00:32:17.225 "numa_id": 0, 00:32:17.225 "assigned_rate_limits": { 00:32:17.225 "rw_ios_per_sec": 0, 00:32:17.225 "rw_mbytes_per_sec": 0, 00:32:17.225 "r_mbytes_per_sec": 0, 00:32:17.225 "w_mbytes_per_sec": 0 00:32:17.225 }, 00:32:17.225 "claimed": false, 00:32:17.225 "zoned": false, 00:32:17.225 "supported_io_types": { 00:32:17.225 "read": true, 00:32:17.225 "write": true, 00:32:17.225 "unmap": true, 00:32:17.225 "flush": true, 00:32:17.225 "reset": true, 00:32:17.225 "nvme_admin": true, 00:32:17.225 "nvme_io": true, 00:32:17.225 "nvme_io_md": false, 00:32:17.225 "write_zeroes": true, 00:32:17.225 "zcopy": false, 00:32:17.225 "get_zone_info": false, 00:32:17.225 "zone_management": false, 00:32:17.225 "zone_append": false, 00:32:17.225 "compare": true, 00:32:17.225 "compare_and_write": true, 00:32:17.225 "abort": true, 00:32:17.225 "seek_hole": false, 00:32:17.225 "seek_data": false, 00:32:17.225 "copy": true, 
00:32:17.225 "nvme_iov_md": false 00:32:17.225 }, 00:32:17.225 "memory_domains": [ 00:32:17.225 { 00:32:17.225 "dma_device_id": "system", 00:32:17.225 "dma_device_type": 1 00:32:17.225 } 00:32:17.225 ], 00:32:17.225 "driver_specific": { 00:32:17.225 "nvme": [ 00:32:17.225 { 00:32:17.225 "trid": { 00:32:17.225 "trtype": "TCP", 00:32:17.225 "adrfam": "IPv4", 00:32:17.225 "traddr": "10.0.0.2", 00:32:17.225 "trsvcid": "4420", 00:32:17.225 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:17.225 }, 00:32:17.225 "ctrlr_data": { 00:32:17.225 "cntlid": 1, 00:32:17.225 "vendor_id": "0x8086", 00:32:17.225 "model_number": "SPDK bdev Controller", 00:32:17.225 "serial_number": "SPDK0", 00:32:17.225 "firmware_revision": "25.01", 00:32:17.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.225 "oacs": { 00:32:17.225 "security": 0, 00:32:17.225 "format": 0, 00:32:17.225 "firmware": 0, 00:32:17.225 "ns_manage": 0 00:32:17.225 }, 00:32:17.225 "multi_ctrlr": true, 00:32:17.225 "ana_reporting": false 00:32:17.225 }, 00:32:17.225 "vs": { 00:32:17.225 "nvme_version": "1.3" 00:32:17.225 }, 00:32:17.225 "ns_data": { 00:32:17.225 "id": 1, 00:32:17.225 "can_share": true 00:32:17.225 } 00:32:17.225 } 00:32:17.225 ], 00:32:17.225 "mp_policy": "active_passive" 00:32:17.225 } 00:32:17.225 } 00:32:17.225 ] 00:32:17.225 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3761014 00:32:17.225 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:17.225 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:17.486 Running I/O for 10 seconds... 
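For context while the 10-second run below proceeds: the clean-path test backs an lvstore (4 MiB clusters, 49 data clusters after metadata) with a 200 MiB AIO file, exports a 150 MiB lvol over NVMe/TCP to bdevperf, then grows the backing file and the lvstore mid-run (the grow itself appears a few entries below). A condensed sketch of the flow with rpc.py paths shortened; the roughly one-cluster metadata reservation is an inference from the 49/99 counts, not a quoted spec:

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs   # -> 49 data clusters
  rpc.py bdev_lvol_create -u "$lvs_uuid" lvol 150                     # 150 MiB volume
  truncate -s 400M aio_file                 # grow the backing file under the live lvstore
  rpc.py bdev_aio_rescan aio_bdev           # block count 51200 -> 102400 (4 KiB blocks)
  rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"
  rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 99
  # Arithmetic: 200 MiB / 4 MiB = 50 clusters, ~1 reserved -> 49; 400 / 4 = 100 -> 99.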
00:32:18.428 Latency(us) 00:32:18.428 [2024-12-09T10:47:10.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.428 Nvme0n1 : 1.00 17725.00 69.24 0.00 0.00 0.00 0.00 0.00 00:32:18.428 [2024-12-09T10:47:10.590Z] =================================================================================================================== 00:32:18.428 [2024-12-09T10:47:10.590Z] Total : 17725.00 69.24 0.00 0.00 0.00 0.00 0.00 00:32:18.428 00:32:19.370 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:19.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.370 Nvme0n1 : 2.00 17879.50 69.84 0.00 0.00 0.00 0.00 0.00 00:32:19.370 [2024-12-09T10:47:11.532Z] =================================================================================================================== 00:32:19.370 [2024-12-09T10:47:11.532Z] Total : 17879.50 69.84 0.00 0.00 0.00 0.00 0.00 00:32:19.370 00:32:19.370 true 00:32:19.370 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:19.370 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:19.631 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:19.631 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:19.631 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3761014 00:32:20.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.574 Nvme0n1 : 3.00 17931.00 70.04 0.00 0.00 0.00 0.00 0.00 00:32:20.574 [2024-12-09T10:47:12.736Z] =================================================================================================================== 00:32:20.574 [2024-12-09T10:47:12.736Z] Total : 17931.00 70.04 0.00 0.00 0.00 0.00 0.00 00:32:20.574 00:32:21.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.517 Nvme0n1 : 4.00 17891.50 69.89 0.00 0.00 0.00 0.00 0.00 00:32:21.517 [2024-12-09T10:47:13.679Z] =================================================================================================================== 00:32:21.517 [2024-12-09T10:47:13.679Z] Total : 17891.50 69.89 0.00 0.00 0.00 0.00 0.00 00:32:21.517 00:32:22.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.459 Nvme0n1 : 5.00 17862.00 69.77 0.00 0.00 0.00 0.00 0.00 00:32:22.459 [2024-12-09T10:47:14.621Z] =================================================================================================================== 00:32:22.459 [2024-12-09T10:47:14.621Z] Total : 17862.00 69.77 0.00 0.00 0.00 0.00 0.00 00:32:22.459 00:32:23.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.400 Nvme0n1 : 6.00 17845.00 69.71 0.00 0.00 0.00 0.00 0.00 00:32:23.400 [2024-12-09T10:47:15.562Z] 
=================================================================================================================== 00:32:23.400 [2024-12-09T10:47:15.562Z] Total : 17845.00 69.71 0.00 0.00 0.00 0.00 0.00 00:32:23.400 00:32:24.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.343 Nvme0n1 : 7.00 17835.14 69.67 0.00 0.00 0.00 0.00 0.00 00:32:24.343 [2024-12-09T10:47:16.505Z] =================================================================================================================== 00:32:24.343 [2024-12-09T10:47:16.505Z] Total : 17835.14 69.67 0.00 0.00 0.00 0.00 0.00 00:32:24.343 00:32:25.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.285 Nvme0n1 : 8.00 17829.75 69.65 0.00 0.00 0.00 0.00 0.00 00:32:25.285 [2024-12-09T10:47:17.447Z] =================================================================================================================== 00:32:25.285 [2024-12-09T10:47:17.447Z] Total : 17829.75 69.65 0.00 0.00 0.00 0.00 0.00 00:32:25.285 00:32:26.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.668 Nvme0n1 : 9.00 17825.56 69.63 0.00 0.00 0.00 0.00 0.00 00:32:26.668 [2024-12-09T10:47:18.830Z] =================================================================================================================== 00:32:26.668 [2024-12-09T10:47:18.830Z] Total : 17825.56 69.63 0.00 0.00 0.00 0.00 0.00 00:32:26.668 00:32:27.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.610 Nvme0n1 : 10.00 17823.80 69.62 0.00 0.00 0.00 0.00 0.00 00:32:27.610 [2024-12-09T10:47:19.772Z] =================================================================================================================== 00:32:27.610 [2024-12-09T10:47:19.772Z] Total : 17823.80 69.62 0.00 0.00 0.00 0.00 0.00 00:32:27.610 00:32:27.610 00:32:27.610 Latency(us) 00:32:27.610 [2024-12-09T10:47:19.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.610 Nvme0n1 : 10.01 17823.04 69.62 0.00 0.00 7175.82 2566.83 13489.49 00:32:27.610 [2024-12-09T10:47:19.773Z] =================================================================================================================== 00:32:27.611 [2024-12-09T10:47:19.773Z] Total : 17823.04 69.62 0.00 0.00 7175.82 2566.83 13489.49 00:32:27.611 { 00:32:27.611 "results": [ 00:32:27.611 { 00:32:27.611 "job": "Nvme0n1", 00:32:27.611 "core_mask": "0x2", 00:32:27.611 "workload": "randwrite", 00:32:27.611 "status": "finished", 00:32:27.611 "queue_depth": 128, 00:32:27.611 "io_size": 4096, 00:32:27.611 "runtime": 10.006711, 00:32:27.611 "iops": 17823.038958554913, 00:32:27.611 "mibps": 69.62124593185513, 00:32:27.611 "io_failed": 0, 00:32:27.611 "io_timeout": 0, 00:32:27.611 "avg_latency_us": 7175.824274815437, 00:32:27.611 "min_latency_us": 2566.826666666667, 00:32:27.611 "max_latency_us": 13489.493333333334 00:32:27.611 } 00:32:27.611 ], 00:32:27.611 "core_count": 1 00:32:27.611 } 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3760881 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3760881 ']' 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3760881 
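The entries here and below trace killprocess from autotest_common.sh shutting down bdevperf (pid 3760881). A simplified equivalent of its shape, reconstructed from the traced checks; treat it as a sketch rather than the exact helper:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                       # still running?
      if [ "$(uname)" = Linux ]; then
          local pname
          pname=$(ps --no-headers -o comm= "$pid")     # reactor_1 in this run
          [ "$pname" = sudo ] && return 1              # never signal a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }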
00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3760881 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3760881' 00:32:27.611 killing process with pid 3760881 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3760881 00:32:27.611 Received shutdown signal, test time was about 10.000000 seconds 00:32:27.611 00:32:27.611 Latency(us) 00:32:27.611 [2024-12-09T10:47:19.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.611 [2024-12-09T10:47:19.773Z] =================================================================================================================== 00:32:27.611 [2024-12-09T10:47:19.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3760881 00:32:27.611 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:27.871 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.871 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:27.871 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:28.133 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:28.133 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:28.133 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:28.395 [2024-12-09 11:47:20.333182] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 
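The NOT invocation at the end of the previous entry is a negative assertion: with aio_bdev hot-removed, the lvstore must be gone, so bdev_lvol_get_lvstores is expected to fail with -19 (No such device), as the JSON-RPC exchange below confirms. A simplified equivalent of the check, with the rpc.py path shortened:

  # Dirty-path check: the RPC must fail now that the backing bdev is gone.
  if rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid"; then
      echo "lvstore unexpectedly still present" >&2
      exit 1
  fi
  # Expected response: {"code": -19, "message": "No such device"}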
00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:28.395 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:28.657 request: 00:32:28.657 { 00:32:28.657 "uuid": "eff387b5-bddf-42bc-ad0c-fa78f841296b", 00:32:28.657 "method": "bdev_lvol_get_lvstores", 00:32:28.657 "req_id": 1 00:32:28.657 } 00:32:28.657 Got JSON-RPC error response 00:32:28.657 response: 00:32:28.657 { 00:32:28.657 "code": -19, 00:32:28.657 "message": "No such device" 00:32:28.657 } 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:28.657 aio_bdev 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
e2023f29-7229-4285-b7ef-7b152f9a4347 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e2023f29-7229-4285-b7ef-7b152f9a4347 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:28.657 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:28.919 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e2023f29-7229-4285-b7ef-7b152f9a4347 -t 2000 00:32:28.919 [ 00:32:28.919 { 00:32:28.919 "name": "e2023f29-7229-4285-b7ef-7b152f9a4347", 00:32:28.919 "aliases": [ 00:32:28.919 "lvs/lvol" 00:32:28.919 ], 00:32:28.919 "product_name": "Logical Volume", 00:32:28.919 "block_size": 4096, 00:32:28.919 "num_blocks": 38912, 00:32:28.919 "uuid": "e2023f29-7229-4285-b7ef-7b152f9a4347", 00:32:28.919 "assigned_rate_limits": { 00:32:28.919 "rw_ios_per_sec": 0, 00:32:28.919 "rw_mbytes_per_sec": 0, 00:32:28.919 "r_mbytes_per_sec": 0, 00:32:28.919 "w_mbytes_per_sec": 0 00:32:28.919 }, 00:32:28.919 "claimed": false, 00:32:28.919 "zoned": false, 00:32:28.919 "supported_io_types": { 00:32:28.919 "read": true, 00:32:28.919 "write": true, 00:32:28.919 "unmap": true, 00:32:28.919 "flush": false, 00:32:28.919 "reset": true, 00:32:28.919 "nvme_admin": false, 00:32:28.919 "nvme_io": false, 00:32:28.919 "nvme_io_md": false, 00:32:28.919 "write_zeroes": true, 00:32:28.919 "zcopy": false, 00:32:28.919 "get_zone_info": false, 00:32:28.919 "zone_management": false, 00:32:28.919 "zone_append": false, 00:32:28.919 "compare": false, 00:32:28.919 "compare_and_write": false, 00:32:28.919 "abort": false, 00:32:28.919 "seek_hole": true, 00:32:28.919 "seek_data": true, 00:32:28.919 "copy": false, 00:32:28.919 "nvme_iov_md": false 00:32:28.919 }, 00:32:28.919 "driver_specific": { 00:32:28.919 "lvol": { 00:32:28.919 "lvol_store_uuid": "eff387b5-bddf-42bc-ad0c-fa78f841296b", 00:32:28.919 "base_bdev": "aio_bdev", 00:32:28.919 "thin_provision": false, 00:32:28.919 "num_allocated_clusters": 38, 00:32:28.919 "snapshot": false, 00:32:28.919 "clone": false, 00:32:28.919 "esnap_clone": false 00:32:28.919 } 00:32:28.919 } 00:32:28.919 } 00:32:28.919 ] 00:32:29.180 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:29.180 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:29.180 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:29.180 11:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:29.180 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:29.180 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:29.441 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:29.441 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2023f29-7229-4285-b7ef-7b152f9a4347 00:32:29.703 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eff387b5-bddf-42bc-ad0c-fa78f841296b 00:32:29.703 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:29.963 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:29.963 00:32:29.963 real 0m15.904s 00:32:29.963 user 0m15.488s 00:32:29.963 sys 0m1.458s 00:32:29.963 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.963 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:29.963 ************************************ 00:32:29.963 END TEST lvs_grow_clean 00:32:29.963 ************************************ 00:32:29.963 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:29.963 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:29.963 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.963 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:30.224 ************************************ 00:32:30.224 START TEST lvs_grow_dirty 00:32:30.224 ************************************ 00:32:30.224 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:30.224 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:30.224 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:30.224 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:30.224 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:30.224 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:30.224 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:30.225 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.225 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.225 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:30.225 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:30.225 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:30.486 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:30.486 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:30.486 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:30.748 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:30.748 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:30.748 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea lvol 150 00:32:30.748 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=56c2693d-cf17-42a5-b6c6-31be35aab24d 00:32:30.748 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.748 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:31.009 [2024-12-09 11:47:23.001191] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:31.009 [2024-12-09 11:47:23.001333] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:31.009 true 00:32:31.009 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:31.009 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:31.271 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:31.271 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:31.271 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56c2693d-cf17-42a5-b6c6-31be35aab24d 00:32:31.531 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.531 [2024-12-09 11:47:23.669765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.532 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3763866 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3763866 /var/tmp/bdevperf.sock 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3763866 ']' 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:31.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:31.796 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:31.796 [2024-12-09 11:47:23.912568] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:31.796 [2024-12-09 11:47:23.912637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763866 ] 00:32:32.056 [2024-12-09 11:47:24.000812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.056 [2024-12-09 11:47:24.031667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.629 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.629 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:32.629 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:32.890 Nvme0n1 00:32:32.890 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:32.890 [ 00:32:32.890 { 00:32:32.890 "name": "Nvme0n1", 00:32:32.890 "aliases": [ 00:32:32.890 "56c2693d-cf17-42a5-b6c6-31be35aab24d" 00:32:32.890 ], 00:32:32.890 "product_name": "NVMe disk", 00:32:32.890 "block_size": 4096, 00:32:32.890 "num_blocks": 38912, 00:32:32.890 "uuid": "56c2693d-cf17-42a5-b6c6-31be35aab24d", 00:32:32.890 "numa_id": 0, 00:32:32.890 "assigned_rate_limits": { 00:32:32.890 "rw_ios_per_sec": 0, 00:32:32.890 "rw_mbytes_per_sec": 0, 00:32:32.890 "r_mbytes_per_sec": 0, 00:32:32.890 "w_mbytes_per_sec": 0 00:32:32.890 }, 00:32:32.890 "claimed": false, 00:32:32.890 "zoned": false, 00:32:32.890 "supported_io_types": { 00:32:32.890 "read": true, 00:32:32.890 "write": true, 00:32:32.890 "unmap": true, 00:32:32.890 "flush": true, 00:32:32.890 "reset": true, 00:32:32.890 "nvme_admin": true, 00:32:32.890 "nvme_io": true, 00:32:32.890 "nvme_io_md": false, 00:32:32.890 "write_zeroes": true, 00:32:32.890 "zcopy": false, 00:32:32.890 "get_zone_info": false, 00:32:32.890 "zone_management": false, 00:32:32.890 "zone_append": false, 00:32:32.890 "compare": true, 00:32:32.890 "compare_and_write": true, 00:32:32.890 "abort": true, 00:32:32.890 "seek_hole": false, 00:32:32.890 "seek_data": false, 00:32:32.890 "copy": true, 00:32:32.890 "nvme_iov_md": false 00:32:32.890 }, 00:32:32.890 "memory_domains": [ 00:32:32.890 { 00:32:32.890 "dma_device_id": "system", 00:32:32.890 "dma_device_type": 1 00:32:32.890 } 00:32:32.890 ], 00:32:32.890 "driver_specific": { 
00:32:32.890 "nvme": [ 00:32:32.890 { 00:32:32.890 "trid": { 00:32:32.890 "trtype": "TCP", 00:32:32.890 "adrfam": "IPv4", 00:32:32.890 "traddr": "10.0.0.2", 00:32:32.890 "trsvcid": "4420", 00:32:32.890 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:32.890 }, 00:32:32.890 "ctrlr_data": { 00:32:32.890 "cntlid": 1, 00:32:32.890 "vendor_id": "0x8086", 00:32:32.890 "model_number": "SPDK bdev Controller", 00:32:32.890 "serial_number": "SPDK0", 00:32:32.890 "firmware_revision": "25.01", 00:32:32.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.890 "oacs": { 00:32:32.890 "security": 0, 00:32:32.890 "format": 0, 00:32:32.890 "firmware": 0, 00:32:32.890 "ns_manage": 0 00:32:32.890 }, 00:32:32.890 "multi_ctrlr": true, 00:32:32.890 "ana_reporting": false 00:32:32.890 }, 00:32:32.890 "vs": { 00:32:32.890 "nvme_version": "1.3" 00:32:32.890 }, 00:32:32.890 "ns_data": { 00:32:32.890 "id": 1, 00:32:32.890 "can_share": true 00:32:32.890 } 00:32:32.890 } 00:32:32.890 ], 00:32:32.891 "mp_policy": "active_passive" 00:32:32.891 } 00:32:32.891 } 00:32:32.891 ] 00:32:32.891 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3763978 00:32:32.891 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:32.891 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:33.150 Running I/O for 10 seconds... 00:32:34.090 Latency(us) 00:32:34.090 [2024-12-09T10:47:26.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:34.090 Nvme0n1 : 1.00 17141.00 66.96 0.00 0.00 0.00 0.00 0.00 00:32:34.090 [2024-12-09T10:47:26.252Z] =================================================================================================================== 00:32:34.090 [2024-12-09T10:47:26.252Z] Total : 17141.00 66.96 0.00 0.00 0.00 0.00 0.00 00:32:34.090 00:32:35.033 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:35.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.033 Nvme0n1 : 2.00 17210.50 67.23 0.00 0.00 0.00 0.00 0.00 00:32:35.033 [2024-12-09T10:47:27.195Z] =================================================================================================================== 00:32:35.033 [2024-12-09T10:47:27.195Z] Total : 17210.50 67.23 0.00 0.00 0.00 0.00 0.00 00:32:35.033 00:32:35.294 true 00:32:35.294 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:35.294 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:35.294 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:35.294 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 
-- # (( data_clusters == 99 )) 00:32:35.294 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3763978 00:32:36.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.237 Nvme0n1 : 3.00 17239.00 67.34 0.00 0.00 0.00 0.00 0.00 00:32:36.237 [2024-12-09T10:47:28.399Z] =================================================================================================================== 00:32:36.237 [2024-12-09T10:47:28.400Z] Total : 17239.00 67.34 0.00 0.00 0.00 0.00 0.00 00:32:36.238 00:32:37.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.180 Nvme0n1 : 4.00 17265.25 67.44 0.00 0.00 0.00 0.00 0.00 00:32:37.180 [2024-12-09T10:47:29.342Z] =================================================================================================================== 00:32:37.180 [2024-12-09T10:47:29.342Z] Total : 17265.25 67.44 0.00 0.00 0.00 0.00 0.00 00:32:37.180 00:32:38.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.122 Nvme0n1 : 5.00 17287.40 67.53 0.00 0.00 0.00 0.00 0.00 00:32:38.122 [2024-12-09T10:47:30.284Z] =================================================================================================================== 00:32:38.122 [2024-12-09T10:47:30.284Z] Total : 17287.40 67.53 0.00 0.00 0.00 0.00 0.00 00:32:38.122 00:32:39.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.065 Nvme0n1 : 6.00 17304.83 67.60 0.00 0.00 0.00 0.00 0.00 00:32:39.065 [2024-12-09T10:47:31.227Z] =================================================================================================================== 00:32:39.065 [2024-12-09T10:47:31.227Z] Total : 17304.83 67.60 0.00 0.00 0.00 0.00 0.00 00:32:39.065 00:32:40.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.008 Nvme0n1 : 7.00 17324.14 67.67 0.00 0.00 0.00 0.00 0.00 00:32:40.008 [2024-12-09T10:47:32.170Z] =================================================================================================================== 00:32:40.008 [2024-12-09T10:47:32.170Z] Total : 17324.14 67.67 0.00 0.00 0.00 0.00 0.00 00:32:40.008 00:32:41.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.393 Nvme0n1 : 8.00 17338.62 67.73 0.00 0.00 0.00 0.00 0.00 00:32:41.393 [2024-12-09T10:47:33.555Z] =================================================================================================================== 00:32:41.393 [2024-12-09T10:47:33.555Z] Total : 17338.62 67.73 0.00 0.00 0.00 0.00 0.00 00:32:41.393 00:32:42.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.336 Nvme0n1 : 9.00 17353.44 67.79 0.00 0.00 0.00 0.00 0.00 00:32:42.336 [2024-12-09T10:47:34.498Z] =================================================================================================================== 00:32:42.336 [2024-12-09T10:47:34.498Z] Total : 17353.44 67.79 0.00 0.00 0.00 0.00 0.00 00:32:42.336 00:32:43.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.280 Nvme0n1 : 10.00 17363.70 67.83 0.00 0.00 0.00 0.00 0.00 00:32:43.280 [2024-12-09T10:47:35.442Z] =================================================================================================================== 00:32:43.280 [2024-12-09T10:47:35.442Z] Total : 17363.70 67.83 0.00 0.00 0.00 0.00 0.00 00:32:43.280 00:32:43.280 00:32:43.280 Latency(us) 00:32:43.280 
[2024-12-09T10:47:35.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.280 Nvme0n1 : 10.01 17364.57 67.83 0.00 0.00 7367.76 1590.61 9229.65 00:32:43.280 [2024-12-09T10:47:35.443Z] =================================================================================================================== 00:32:43.281 [2024-12-09T10:47:35.443Z] Total : 17364.57 67.83 0.00 0.00 7367.76 1590.61 9229.65 00:32:43.281 { 00:32:43.281 "results": [ 00:32:43.281 { 00:32:43.281 "job": "Nvme0n1", 00:32:43.281 "core_mask": "0x2", 00:32:43.281 "workload": "randwrite", 00:32:43.281 "status": "finished", 00:32:43.281 "queue_depth": 128, 00:32:43.281 "io_size": 4096, 00:32:43.281 "runtime": 10.006872, 00:32:43.281 "iops": 17364.567069509834, 00:32:43.281 "mibps": 67.83034011527279, 00:32:43.281 "io_failed": 0, 00:32:43.281 "io_timeout": 0, 00:32:43.281 "avg_latency_us": 7367.764397816974, 00:32:43.281 "min_latency_us": 1590.6133333333332, 00:32:43.281 "max_latency_us": 9229.653333333334 00:32:43.281 } 00:32:43.281 ], 00:32:43.281 "core_count": 1 00:32:43.281 } 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3763866 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3763866 ']' 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3763866 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763866 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763866' 00:32:43.281 killing process with pid 3763866 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3763866 00:32:43.281 Received shutdown signal, test time was about 10.000000 seconds 00:32:43.281 00:32:43.281 Latency(us) 00:32:43.281 [2024-12-09T10:47:35.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.281 [2024-12-09T10:47:35.443Z] =================================================================================================================== 00:32:43.281 [2024-12-09T10:47:35.443Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3763866 00:32:43.281 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.542 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3760175 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3760175 00:32:43.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3760175 Killed "${NVMF_APP[@]}" "$@" 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3766120 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3766120 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3766120 ']' 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:43.803 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:44.065 [2024-12-09 11:47:35.998707] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:44.065 [2024-12-09 11:47:35.999700] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:44.065 [2024-12-09 11:47:35.999742] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.065 [2024-12-09 11:47:36.077838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.065 [2024-12-09 11:47:36.112258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.065 [2024-12-09 11:47:36.112291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.065 [2024-12-09 11:47:36.112299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.065 [2024-12-09 11:47:36.112305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.065 [2024-12-09 11:47:36.112312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.065 [2024-12-09 11:47:36.112858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.065 [2024-12-09 11:47:36.168440] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:44.065 [2024-12-09 11:47:36.168676] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:44.636 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.636 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:44.636 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:44.636 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.636 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:44.897 [2024-12-09 11:47:36.968148] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:44.897 [2024-12-09 11:47:36.968252] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:44.897 [2024-12-09 11:47:36.968282] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 56c2693d-cf17-42a5-b6c6-31be35aab24d 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=56c2693d-cf17-42a5-b6c6-31be35aab24d 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:44.897 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:45.158 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56c2693d-cf17-42a5-b6c6-31be35aab24d -t 2000 00:32:45.158 [ 00:32:45.158 { 00:32:45.158 "name": "56c2693d-cf17-42a5-b6c6-31be35aab24d", 00:32:45.158 "aliases": [ 00:32:45.158 "lvs/lvol" 00:32:45.158 ], 00:32:45.158 "product_name": "Logical Volume", 00:32:45.158 "block_size": 4096, 00:32:45.158 "num_blocks": 38912, 00:32:45.158 "uuid": "56c2693d-cf17-42a5-b6c6-31be35aab24d", 00:32:45.158 "assigned_rate_limits": { 00:32:45.158 "rw_ios_per_sec": 0, 00:32:45.158 "rw_mbytes_per_sec": 0, 00:32:45.158 
"r_mbytes_per_sec": 0, 00:32:45.158 "w_mbytes_per_sec": 0 00:32:45.158 }, 00:32:45.158 "claimed": false, 00:32:45.158 "zoned": false, 00:32:45.158 "supported_io_types": { 00:32:45.158 "read": true, 00:32:45.158 "write": true, 00:32:45.158 "unmap": true, 00:32:45.158 "flush": false, 00:32:45.158 "reset": true, 00:32:45.158 "nvme_admin": false, 00:32:45.158 "nvme_io": false, 00:32:45.158 "nvme_io_md": false, 00:32:45.158 "write_zeroes": true, 00:32:45.158 "zcopy": false, 00:32:45.158 "get_zone_info": false, 00:32:45.158 "zone_management": false, 00:32:45.158 "zone_append": false, 00:32:45.158 "compare": false, 00:32:45.158 "compare_and_write": false, 00:32:45.158 "abort": false, 00:32:45.158 "seek_hole": true, 00:32:45.158 "seek_data": true, 00:32:45.158 "copy": false, 00:32:45.158 "nvme_iov_md": false 00:32:45.158 }, 00:32:45.158 "driver_specific": { 00:32:45.158 "lvol": { 00:32:45.158 "lvol_store_uuid": "b63ef755-a80c-4f5c-8e8d-b9401e4992ea", 00:32:45.158 "base_bdev": "aio_bdev", 00:32:45.158 "thin_provision": false, 00:32:45.158 "num_allocated_clusters": 38, 00:32:45.158 "snapshot": false, 00:32:45.158 "clone": false, 00:32:45.158 "esnap_clone": false 00:32:45.158 } 00:32:45.158 } 00:32:45.158 } 00:32:45.158 ] 00:32:45.158 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:45.158 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:45.158 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:45.418 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:45.418 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:45.418 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:45.679 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:45.679 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:45.940 [2024-12-09 11:47:37.849275] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:45.940 11:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.940 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:45.941 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:45.941 request: 00:32:45.941 { 00:32:45.941 "uuid": "b63ef755-a80c-4f5c-8e8d-b9401e4992ea", 00:32:45.941 "method": "bdev_lvol_get_lvstores", 00:32:45.941 "req_id": 1 00:32:45.941 } 00:32:45.941 Got JSON-RPC error response 00:32:45.941 response: 00:32:45.941 { 00:32:45.941 "code": -19, 00:32:45.941 "message": "No such device" 00:32:45.941 } 00:32:45.941 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:45.941 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:45.941 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:45.941 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:45.941 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:46.200 aio_bdev 00:32:46.200 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 56c2693d-cf17-42a5-b6c6-31be35aab24d 00:32:46.200 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=56c2693d-cf17-42a5-b6c6-31be35aab24d 00:32:46.200 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:46.200 11:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:46.200 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:46.200 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:46.200 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:46.460 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56c2693d-cf17-42a5-b6c6-31be35aab24d -t 2000 00:32:46.460 [ 00:32:46.460 { 00:32:46.460 "name": "56c2693d-cf17-42a5-b6c6-31be35aab24d", 00:32:46.460 "aliases": [ 00:32:46.460 "lvs/lvol" 00:32:46.460 ], 00:32:46.460 "product_name": "Logical Volume", 00:32:46.460 "block_size": 4096, 00:32:46.460 "num_blocks": 38912, 00:32:46.460 "uuid": "56c2693d-cf17-42a5-b6c6-31be35aab24d", 00:32:46.460 "assigned_rate_limits": { 00:32:46.460 "rw_ios_per_sec": 0, 00:32:46.460 "rw_mbytes_per_sec": 0, 00:32:46.460 "r_mbytes_per_sec": 0, 00:32:46.460 "w_mbytes_per_sec": 0 00:32:46.460 }, 00:32:46.460 "claimed": false, 00:32:46.460 "zoned": false, 00:32:46.460 "supported_io_types": { 00:32:46.460 "read": true, 00:32:46.460 "write": true, 00:32:46.460 "unmap": true, 00:32:46.460 "flush": false, 00:32:46.460 "reset": true, 00:32:46.460 "nvme_admin": false, 00:32:46.460 "nvme_io": false, 00:32:46.460 "nvme_io_md": false, 00:32:46.460 "write_zeroes": true, 00:32:46.460 "zcopy": false, 00:32:46.460 "get_zone_info": false, 00:32:46.460 "zone_management": false, 00:32:46.460 "zone_append": false, 00:32:46.460 "compare": false, 00:32:46.460 "compare_and_write": false, 00:32:46.460 "abort": false, 00:32:46.460 "seek_hole": true, 00:32:46.460 "seek_data": true, 00:32:46.460 "copy": false, 00:32:46.460 "nvme_iov_md": false 00:32:46.460 }, 00:32:46.460 "driver_specific": { 00:32:46.460 "lvol": { 00:32:46.460 "lvol_store_uuid": "b63ef755-a80c-4f5c-8e8d-b9401e4992ea", 00:32:46.460 "base_bdev": "aio_bdev", 00:32:46.460 "thin_provision": false, 00:32:46.460 "num_allocated_clusters": 38, 00:32:46.460 "snapshot": false, 00:32:46.460 "clone": false, 00:32:46.460 "esnap_clone": false 00:32:46.460 } 00:32:46.460 } 00:32:46.460 } 00:32:46.460 ] 00:32:46.460 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:46.460 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:46.460 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:46.720 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:46.720 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:46.720 11:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:46.980 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:46.980 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56c2693d-cf17-42a5-b6c6-31be35aab24d 00:32:46.980 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b63ef755-a80c-4f5c-8e8d-b9401e4992ea 00:32:47.240 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.499 00:32:47.499 real 0m17.335s 00:32:47.499 user 0m34.640s 00:32:47.499 sys 0m3.478s 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:47.499 ************************************ 00:32:47.499 END TEST lvs_grow_dirty 00:32:47.499 ************************************ 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:47.499 nvmf_trace.0 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:47.499 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.500 rmmod nvme_tcp 00:32:47.500 rmmod nvme_fabrics 00:32:47.500 rmmod nvme_keyring 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3766120 ']' 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3766120 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3766120 ']' 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3766120 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.500 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3766120 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3766120' 00:32:47.759 killing process with pid 3766120 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3766120 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3766120 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.759 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.301 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.301 00:32:50.301 real 0m44.666s 00:32:50.301 user 0m53.222s 00:32:50.301 sys 0m10.946s 00:32:50.301 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.301 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:50.301 ************************************ 00:32:50.301 END TEST nvmf_lvs_grow 00:32:50.301 ************************************ 00:32:50.301 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:50.301 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.301 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.301 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.301 ************************************ 00:32:50.301 START TEST nvmf_bdev_io_wait 00:32:50.301 ************************************ 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:50.301 * Looking for test storage... 
00:32:50.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.301 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.301 --rc genhtml_branch_coverage=1 00:32:50.301 --rc genhtml_function_coverage=1 00:32:50.301 --rc genhtml_legend=1 00:32:50.301 --rc geninfo_all_blocks=1 00:32:50.301 --rc geninfo_unexecuted_blocks=1 00:32:50.301 00:32:50.302 ' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.302 --rc genhtml_branch_coverage=1 00:32:50.302 --rc genhtml_function_coverage=1 00:32:50.302 --rc genhtml_legend=1 00:32:50.302 --rc geninfo_all_blocks=1 00:32:50.302 --rc geninfo_unexecuted_blocks=1 00:32:50.302 00:32:50.302 ' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.302 --rc genhtml_branch_coverage=1 00:32:50.302 --rc genhtml_function_coverage=1 00:32:50.302 --rc genhtml_legend=1 00:32:50.302 --rc geninfo_all_blocks=1 00:32:50.302 --rc geninfo_unexecuted_blocks=1 00:32:50.302 00:32:50.302 ' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.302 --rc genhtml_branch_coverage=1 00:32:50.302 --rc genhtml_function_coverage=1 00:32:50.302 --rc genhtml_legend=1 00:32:50.302 --rc geninfo_all_blocks=1 00:32:50.302 --rc 
geninfo_unexecuted_blocks=1 00:32:50.302 00:32:50.302 ' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.302 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
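The assignments just traced bucket the known NIC device IDs per family (e810, x722, mlx) out of a pci_bus_cache map, then narrow pci_devs to the family SPDK_TEST_NVMF_NICS asked for; the loop over pci_devs follows below. A self-contained sketch of that bucketing, with the cache hard-coded to the two E810 ports this rig actually found (the real cache is populated elsewhere in nvmf/common.sh):

```bash
#!/usr/bin/env bash
# Toy version of gather_supported_nvmf_pci_devs: vendor:device IDs are
# looked up in a cache of discovered PCI addresses; unknown IDs expand
# to nothing. The single cache entry below is copied from this log.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"
)
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
pci_devs=("${e810[@]}")   # e810 on tcp: keep only the E810 ports
printf 'candidate NIC: %s\n' "${pci_devs[@]}"
```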
00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:58.443 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:58.443 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:58.443 Found net devices under 0000:31:00.0: cvl_0_0 00:32:58.443 
11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:58.443 Found net devices under 0000:31:00.1: cvl_0_1 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:58.443 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:58.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:58.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms
00:32:58.444 
00:32:58.444 --- 10.0.0.2 ping statistics ---
00:32:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:58.444 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:58.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:58.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms
00:32:58.444 
00:32:58.444 --- 10.0.0.1 ping statistics ---
00:32:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:58.444 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3771110
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3771110
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3771110 ']'
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
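waitforlisten, traced just above with max_retries=100 against /var/tmp/spdk.sock, blocks until the freshly forked target answers RPCs, so the rpc_cmd calls that follow never race the startup. An illustrative approximation, not the real helper (which carries extra bookkeeping); rpc.py is assumed to be on PATH:

```bash
#!/usr/bin/env bash
# Poll until the target's RPC socket answers, or the process dies,
# or we run out of retries. Sketch only, under the assumptions above.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died early
        # rpc_get_methods is a cheap RPC any live SPDK app answers
        rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                     # timed out
}
```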
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.444 [2024-12-09 11:47:49.469560] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:58.444 [2024-12-09 11:47:49.470334] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:58.444 [2024-12-09 11:47:49.470364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.444 [2024-12-09 11:47:49.540112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:58.444 [2024-12-09 11:47:49.577174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.444 [2024-12-09 11:47:49.577205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.444 [2024-12-09 11:47:49.577214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.444 [2024-12-09 11:47:49.577220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.444 [2024-12-09 11:47:49.577226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.444 [2024-12-09 11:47:49.578723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.444 [2024-12-09 11:47:49.578843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.444 [2024-12-09 11:47:49.579002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.444 [2024-12-09 11:47:49.579003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.444 [2024-12-09 11:47:49.579290] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
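At this point the test bench is fully wired: one port of the dual-port E810 (cvl_0_0) lives in a private namespace with the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and the target was started with --wait-for-rpc so it can still be configured before init. The equivalent standalone setup, condensed from the commands traced above (run as root from an SPDK checkout; interface names and addresses are copied from this log), is roughly:

```bash
#!/usr/bin/env bash
set -e
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tagged rule admits NVMe/TCP traffic and is easy to strip at teardown
# (see the iptables-save | grep -v SPDK_NVMF | iptables-restore later).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
```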
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.444 [2024-12-09 11:47:49.725201] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:58.444 [2024-12-09 11:47:49.725689] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:58.444 [2024-12-09 11:47:49.726192] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:58.444 [2024-12-09 11:47:49.726420] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
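The RPC sequence here and in the next few entries configures the still-uninitialized target. Note the deliberately tiny bdev_io pool (-p 5 -c 1): the bdev_io_wait test exists to exercise the path where I/O submissions outrun the pool and must queue for a free bdev_io. A condensed sketch of the same sequence, assuming scripts/rpc.py is reachable as rpc.py (the real rpc_cmd helper adds retry and socket plumbing):

```bash
#!/usr/bin/env bash
# Simplified stand-in for autotest_common.sh's rpc_cmd wrapper.
rpc_cmd() { rpc.py -s /var/tmp/spdk.sock "$@"; }

# --wait-for-rpc left the framework down, so pre-init options can
# still be changed before subsystems initialize.
rpc_cmd bdev_set_options -p 5 -c 1      # tiny bdev_io pool and cache
rpc_cmd framework_start_init            # poll groups spawn (interrupt mode)

# Target stack, matching bdev_io_wait.sh@20-25 in the trace:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```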
00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.444 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.444 [2024-12-09 11:47:49.735465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.445 Malloc0 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.445 [2024-12-09 11:47:49.799640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3771135 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3771137 00:32:58.445 11:47:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.445 { 00:32:58.445 "params": { 00:32:58.445 "name": "Nvme$subsystem", 00:32:58.445 "trtype": "$TEST_TRANSPORT", 00:32:58.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.445 "adrfam": "ipv4", 00:32:58.445 "trsvcid": "$NVMF_PORT", 00:32:58.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.445 "hdgst": ${hdgst:-false}, 00:32:58.445 "ddgst": ${ddgst:-false} 00:32:58.445 }, 00:32:58.445 "method": "bdev_nvme_attach_controller" 00:32:58.445 } 00:32:58.445 EOF 00:32:58.445 )") 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3771139 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3771142 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.445 { 00:32:58.445 "params": { 00:32:58.445 "name": "Nvme$subsystem", 00:32:58.445 "trtype": "$TEST_TRANSPORT", 00:32:58.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.445 "adrfam": "ipv4", 00:32:58.445 "trsvcid": "$NVMF_PORT", 00:32:58.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.445 "hdgst": ${hdgst:-false}, 00:32:58.445 "ddgst": ${ddgst:-false} 00:32:58.445 }, 00:32:58.445 "method": "bdev_nvme_attach_controller" 00:32:58.445 } 00:32:58.445 EOF 00:32:58.445 )") 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.445 { 00:32:58.445 "params": { 00:32:58.445 "name": "Nvme$subsystem", 00:32:58.445 "trtype": "$TEST_TRANSPORT", 00:32:58.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.445 "adrfam": "ipv4", 00:32:58.445 "trsvcid": "$NVMF_PORT", 00:32:58.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.445 "hdgst": ${hdgst:-false}, 00:32:58.445 "ddgst": ${ddgst:-false} 00:32:58.445 }, 00:32:58.445 "method": "bdev_nvme_attach_controller" 00:32:58.445 } 00:32:58.445 EOF 00:32:58.445 )") 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.445 { 00:32:58.445 "params": { 00:32:58.445 "name": "Nvme$subsystem", 00:32:58.445 "trtype": "$TEST_TRANSPORT", 00:32:58.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.445 "adrfam": "ipv4", 00:32:58.445 "trsvcid": "$NVMF_PORT", 00:32:58.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.445 "hdgst": ${hdgst:-false}, 00:32:58.445 "ddgst": ${ddgst:-false} 00:32:58.445 }, 00:32:58.445 "method": "bdev_nvme_attach_controller" 00:32:58.445 } 00:32:58.445 EOF 00:32:58.445 )") 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3771135 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.445 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.445 "params": { 00:32:58.445 "name": "Nvme1", 00:32:58.445 "trtype": "tcp", 00:32:58.445 "traddr": "10.0.0.2", 00:32:58.446 "adrfam": "ipv4", 00:32:58.446 "trsvcid": "4420", 00:32:58.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.446 "hdgst": false, 00:32:58.446 "ddgst": false 00:32:58.446 }, 00:32:58.446 "method": "bdev_nvme_attach_controller" 00:32:58.446 }' 00:32:58.446 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:58.446 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.446 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.446 "params": { 00:32:58.446 "name": "Nvme1", 00:32:58.446 "trtype": "tcp", 00:32:58.446 "traddr": "10.0.0.2", 00:32:58.446 "adrfam": "ipv4", 00:32:58.446 "trsvcid": "4420", 00:32:58.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.446 "hdgst": false, 00:32:58.446 "ddgst": false 00:32:58.446 }, 00:32:58.446 "method": "bdev_nvme_attach_controller" 00:32:58.446 }' 00:32:58.446 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.446 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.446 "params": { 00:32:58.446 "name": "Nvme1", 00:32:58.446 "trtype": "tcp", 00:32:58.446 "traddr": "10.0.0.2", 00:32:58.446 "adrfam": "ipv4", 00:32:58.446 "trsvcid": "4420", 00:32:58.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.446 "hdgst": false, 00:32:58.446 "ddgst": false 00:32:58.446 }, 00:32:58.446 "method": "bdev_nvme_attach_controller" 00:32:58.446 }' 00:32:58.446 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.446 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.446 "params": { 00:32:58.446 "name": "Nvme1", 00:32:58.446 "trtype": "tcp", 00:32:58.446 "traddr": "10.0.0.2", 00:32:58.446 "adrfam": "ipv4", 00:32:58.446 "trsvcid": "4420", 00:32:58.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.446 "hdgst": false, 00:32:58.446 "ddgst": false 00:32:58.446 }, 00:32:58.446 "method": "bdev_nvme_attach_controller" 00:32:58.446 }' 00:32:58.446 [2024-12-09 11:47:49.856884] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:58.446 [2024-12-09 11:47:49.856935] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:58.446 [2024-12-09 11:47:49.857116] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:32:58.446 [2024-12-09 11:47:49.857164] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:58.446 [2024-12-09 11:47:49.857897] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:58.446 [2024-12-09 11:47:49.857944] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:58.446 [2024-12-09 11:47:49.860537] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:32:58.446 [2024-12-09 11:47:49.860583] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:58.446 [2024-12-09 11:47:50.021163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.446 [2024-12-09 11:47:50.052033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:58.446 [2024-12-09 11:47:50.066748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.446 [2024-12-09 11:47:50.096526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:58.446 [2024-12-09 11:47:50.115400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.446 [2024-12-09 11:47:50.144252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:58.446 [2024-12-09 11:47:50.173243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.446 [2024-12-09 11:47:50.201923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:58.446 Running I/O for 1 seconds... 00:32:58.446 Running I/O for 1 seconds... 00:32:58.446 Running I/O for 1 seconds... 00:32:58.446 Running I/O for 1 seconds... 
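The four "Running I/O for 1 seconds" lines come from four concurrent bdevperf instances, one per workload, all attached to the same cnode1 namespace over TCP. Each gets a distinct core mask and shm id (-m/-i), and its attach config arrives via process substitution, which is why the trace shows --json /dev/fd/63. A condensed sketch, with a trimmed stand-in for the nvmf/common.sh gen_nvmf_target_json helper, ahead of the per-workload results below:

```bash
#!/usr/bin/env bash
gen_nvmf_target_json() {
    # Trimmed stand-in; the real helper templates this from NVMF_* vars.
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }'
}
BDEVPERF=./build/examples/bdevperf   # path assumes an SPDK checkout
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait    # the real script waits on the saved WRITE/READ/FLUSH/UNMAP pids
```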
00:32:59.388 168832.00 IOPS, 659.50 MiB/s
00:32:59.388 Latency(us)
00:32:59.388 [2024-12-09T10:47:51.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:59.388 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:32:59.388 Nvme1n1 : 1.00 168469.87 658.09 0.00 0.00 755.44 324.27 2102.61
00:32:59.388 [2024-12-09T10:47:51.550Z] ===================================================================================================================
00:32:59.388 [2024-12-09T10:47:51.550Z] Total : 168469.87 658.09 0.00 0.00 755.44 324.27 2102.61
00:32:59.388 8246.00 IOPS, 32.21 MiB/s
00:32:59.388 Latency(us)
00:32:59.388 [2024-12-09T10:47:51.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:59.388 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:32:59.388 Nvme1n1 : 1.02 8265.98 32.29 0.00 0.00 15379.90 2170.88 20862.29
00:32:59.388 [2024-12-09T10:47:51.550Z] ===================================================================================================================
00:32:59.388 [2024-12-09T10:47:51.550Z] Total : 8265.98 32.29 0.00 0.00 15379.90 2170.88 20862.29
00:32:59.388 13414.00 IOPS, 52.40 MiB/s
00:32:59.388 Latency(us)
00:32:59.388 [2024-12-09T10:47:51.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:59.388 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:32:59.388 Nvme1n1 : 1.01 13472.37 52.63 0.00 0.00 9467.76 2143.57 14964.05
00:32:59.388 [2024-12-09T10:47:51.550Z] ===================================================================================================================
00:32:59.388 [2024-12-09T10:47:51.550Z] Total : 13472.37 52.63 0.00 0.00 9467.76 2143.57 14964.05
00:32:59.388 7780.00 IOPS, 30.39 MiB/s
00:32:59.388 Latency(us)
00:32:59.388 [2024-12-09T10:47:51.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:59.388 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:32:59.388 Nvme1n1 : 1.01 7872.98 30.75 0.00 0.00 16215.72 3877.55 29709.65
00:32:59.388 [2024-12-09T10:47:51.550Z] ===================================================================================================================
00:32:59.388 [2024-12-09T10:47:51.550Z] Total : 7872.98 30.75 0.00 0.00 16215.72 3877.55 29709.65
00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3771137
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3771139
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3771142
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.388 rmmod nvme_tcp 00:32:59.388 rmmod nvme_fabrics 00:32:59.388 rmmod nvme_keyring 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3771110 ']' 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3771110 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3771110 ']' 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3771110 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.388 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3771110 00:32:59.648 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.648 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.648 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3771110' 00:32:59.648 killing process with pid 3771110 00:32:59.648 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3771110 00:32:59.648 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3771110 00:32:59.648 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:59.649 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:02.189 
00:33:02.189 real 0m11.787s
00:33:02.189 user 0m14.279s
00:33:02.189 sys 0m7.116s
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:02.189 ************************************
00:33:02.189 END TEST nvmf_bdev_io_wait
00:33:02.189 ************************************
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:02.189 ************************************
00:33:02.189 START TEST nvmf_queue_depth
00:33:02.189 ************************************
00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:33:02.189 * Looking for test storage...
00:33:02.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:33:02.189 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:02.189 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:02.189 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.190 --rc genhtml_branch_coverage=1 00:33:02.190 --rc genhtml_function_coverage=1 00:33:02.190 --rc genhtml_legend=1 00:33:02.190 --rc geninfo_all_blocks=1 00:33:02.190 --rc geninfo_unexecuted_blocks=1 00:33:02.190 00:33:02.190 ' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.190 --rc genhtml_branch_coverage=1 00:33:02.190 --rc genhtml_function_coverage=1 00:33:02.190 --rc genhtml_legend=1 00:33:02.190 --rc geninfo_all_blocks=1 00:33:02.190 --rc geninfo_unexecuted_blocks=1 00:33:02.190 00:33:02.190 ' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.190 --rc genhtml_branch_coverage=1 00:33:02.190 --rc genhtml_function_coverage=1 00:33:02.190 --rc genhtml_legend=1 00:33:02.190 --rc geninfo_all_blocks=1 00:33:02.190 --rc geninfo_unexecuted_blocks=1 00:33:02.190 00:33:02.190 ' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.190 --rc genhtml_branch_coverage=1 00:33:02.190 --rc genhtml_function_coverage=1 00:33:02.190 --rc genhtml_legend=1 00:33:02.190 --rc geninfo_all_blocks=1 00:33:02.190 --rc 
geninfo_unexecuted_blocks=1 00:33:02.190 00:33:02.190 ' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:02.190 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.191 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.323 11:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:10.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:10.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.323 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:33:10.324 Found net devices under 0000:31:00.0: cvl_0_0 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:10.324 Found net devices under 0000:31:00.1: cvl_0_1 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:33:10.324 00:33:10.324 --- 10.0.0.2 ping statistics --- 00:33:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.324 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:33:10.324 00:33:10.324 --- 10.0.0.1 ping statistics --- 00:33:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.324 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3775970 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3775970 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3775970 ']' 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.324 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.324 [2024-12-09 11:48:01.569935] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.324 [2024-12-09 11:48:01.570925] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:33:10.324 [2024-12-09 11:48:01.570962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.324 [2024-12-09 11:48:01.670632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.324 [2024-12-09 11:48:01.709724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.324 [2024-12-09 11:48:01.709765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.324 [2024-12-09 11:48:01.709774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.324 [2024-12-09 11:48:01.709781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.324 [2024-12-09 11:48:01.709787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.324 [2024-12-09 11:48:01.710451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.324 [2024-12-09 11:48:01.777962] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:10.324 [2024-12-09 11:48:01.778235] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
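For readers reconstructing this setup by hand, the namespace plumbing and target launch traced above reduce to roughly the sketch below. Interface names (cvl_0_0, cvl_0_1), addresses, and paths are specific to this run, and the sketch approximates what nvmf/common.sh does rather than quoting it:

    # Move the target-side E810 port into its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (default ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and confirm reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace: single reactor (mask 0x2 = core 1), interrupt mode.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2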
00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.324 [2024-12-09 11:48:02.423298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.324 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.325 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:10.325 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.325 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.325 Malloc0 00:33:10.325 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.325 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:10.325 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.325 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
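Once the target is up, queue_depth.sh configures it entirely over JSON-RPC; the listener step that closes this sequence completes just below. Condensed, with arguments as traced in this run (rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 = 8 KiB I/O unit size
    rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420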
00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 [2024-12-09 11:48:02.507472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3776043 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3776043 /var/tmp/bdevperf.sock 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3776043 ']' 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:10.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.585 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 [2024-12-09 11:48:02.566859] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:33:10.585 [2024-12-09 11:48:02.566928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776043 ] 00:33:10.585 [2024-12-09 11:48:02.639943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.585 [2024-12-09 11:48:02.676690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.525 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.525 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:11.525 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:11.525 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.525 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:11.525 NVMe0n1 00:33:11.525 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.525 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:11.525 Running I/O for 10 seconds... 00:33:13.846 8194.00 IOPS, 32.01 MiB/s [2024-12-09T10:48:06.947Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-09T10:48:07.888Z] 8840.33 IOPS, 34.53 MiB/s [2024-12-09T10:48:08.830Z] 9534.00 IOPS, 37.24 MiB/s [2024-12-09T10:48:09.770Z] 10057.40 IOPS, 39.29 MiB/s [2024-12-09T10:48:10.712Z] 10411.00 IOPS, 40.67 MiB/s [2024-12-09T10:48:11.652Z] 10664.29 IOPS, 41.66 MiB/s [2024-12-09T10:48:13.033Z] 10829.50 IOPS, 42.30 MiB/s [2024-12-09T10:48:13.974Z] 10978.56 IOPS, 42.88 MiB/s [2024-12-09T10:48:13.974Z] 11085.40 IOPS, 43.30 MiB/s 00:33:21.812 Latency(us) 00:33:21.812 [2024-12-09T10:48:13.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.812 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:21.813 Verification LBA range: start 0x0 length 0x4000 00:33:21.813 NVMe0n1 : 10.05 11127.60 43.47 0.00 0.00 91699.82 8574.29 77769.39 00:33:21.813 [2024-12-09T10:48:13.975Z] =================================================================================================================== 00:33:21.813 [2024-12-09T10:48:13.975Z] Total : 11127.60 43.47 0.00 0.00 91699.82 8574.29 77769.39 00:33:21.813 { 00:33:21.813 "results": [ 00:33:21.813 { 00:33:21.813 "job": "NVMe0n1", 00:33:21.813 "core_mask": "0x1", 00:33:21.813 "workload": "verify", 00:33:21.813 "status": "finished", 00:33:21.813 "verify_range": { 00:33:21.813 "start": 0, 00:33:21.813 "length": 16384 00:33:21.813 }, 00:33:21.813 "queue_depth": 1024, 00:33:21.813 "io_size": 4096, 00:33:21.813 "runtime": 10.047091, 00:33:21.813 "iops": 11127.599023438725, 00:33:21.813 "mibps": 43.46718368530752, 00:33:21.813 "io_failed": 0, 00:33:21.813 "io_timeout": 0, 00:33:21.813 "avg_latency_us": 91699.82233798449, 00:33:21.813 "min_latency_us": 8574.293333333333, 00:33:21.813 "max_latency_us": 77769.38666666667 00:33:21.813 } 
00:33:21.813 ], 00:33:21.813 "core_count": 1 00:33:21.813 } 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3776043 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3776043 ']' 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3776043 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3776043 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3776043' 00:33:21.813 killing process with pid 3776043 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3776043 00:33:21.813 Received shutdown signal, test time was about 10.000000 seconds 00:33:21.813 00:33:21.813 Latency(us) 00:33:21.813 [2024-12-09T10:48:13.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.813 [2024-12-09T10:48:13.975Z] =================================================================================================================== 00:33:21.813 [2024-12-09T10:48:13.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3776043 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.813 rmmod nvme_tcp 00:33:21.813 rmmod nvme_fabrics 00:33:21.813 rmmod nvme_keyring 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
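For reference, the throughput figures above come from SPDK's bdevperf example app driven over its own RPC socket; a condensed sketch with the parameters traced in this run is below. The per-second samples climb from about 8.2k to 11.1k IOPS as the run warms up, and the full 10.05 s run averages 11127.60 IOPS (43.47 MiB/s) at queue depth 1024:

    # -z: start bdevperf idle and wait for an RPC to kick off the workload.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # Attach the NVMe-oF TCP controller exported by the target in the namespace.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Launch the 10 s verify workload (4 KiB I/Os, queue depth 1024).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests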
00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3775970 ']' 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3775970 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3775970 ']' 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3775970 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.813 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3775970 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3775970' 00:33:22.074 killing process with pid 3775970 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3775970 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3775970 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.074 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.621 00:33:24.621 real 0m22.322s 00:33:24.621 user 0m24.581s 00:33:24.621 sys 0m7.298s 00:33:24.621 11:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:24.621 ************************************ 00:33:24.621 END TEST nvmf_queue_depth 00:33:24.621 ************************************ 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:24.621 ************************************ 00:33:24.621 START TEST nvmf_target_multipath 00:33:24.621 ************************************ 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:24.621 * Looking for test storage... 00:33:24.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:24.621 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.622 --rc genhtml_branch_coverage=1 00:33:24.622 --rc genhtml_function_coverage=1 00:33:24.622 --rc genhtml_legend=1 00:33:24.622 --rc geninfo_all_blocks=1 00:33:24.622 --rc geninfo_unexecuted_blocks=1 00:33:24.622 00:33:24.622 ' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.622 --rc genhtml_branch_coverage=1 00:33:24.622 --rc genhtml_function_coverage=1 00:33:24.622 --rc genhtml_legend=1 00:33:24.622 --rc geninfo_all_blocks=1 00:33:24.622 --rc geninfo_unexecuted_blocks=1 00:33:24.622 00:33:24.622 ' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.622 --rc genhtml_branch_coverage=1 00:33:24.622 --rc genhtml_function_coverage=1 00:33:24.622 --rc genhtml_legend=1 
00:33:24.622 --rc geninfo_all_blocks=1 00:33:24.622 --rc geninfo_unexecuted_blocks=1 00:33:24.622 00:33:24.622 ' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.622 --rc genhtml_branch_coverage=1 00:33:24.622 --rc genhtml_function_coverage=1 00:33:24.622 --rc genhtml_legend=1 00:33:24.622 --rc geninfo_all_blocks=1 00:33:24.622 --rc geninfo_unexecuted_blocks=1 00:33:24.622 00:33:24.622 ' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.622 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.623 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.767 11:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:32.767 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:32.767 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:32.767 11:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:32.767 Found net devices under 0000:31:00.0: cvl_0_0 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:32.767 Found net devices under 0000:31:00.1: cvl_0_1 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:32.767 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:32.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:32.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:33:32.768 00:33:32.768 --- 10.0.0.2 ping statistics --- 00:33:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.768 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:32.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:33:32.768 00:33:32.768 --- 10.0.0.1 ping statistics --- 00:33:32.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.768 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:32.768 only one NIC for nvmf test 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:32.768 rmmod nvme_tcp 00:33:32.768 rmmod nvme_fabrics 00:33:32.768 rmmod nvme_keyring 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:32.768 11:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:32.768 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:32.768 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:32.768 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.768 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.768 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:34.222 11:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.222 00:33:34.222 real 0m9.848s 00:33:34.222 user 0m2.145s 00:33:34.222 sys 0m5.627s 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:34.222 ************************************ 00:33:34.222 END TEST nvmf_target_multipath 00:33:34.222 ************************************ 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:34.222 ************************************ 00:33:34.222 START TEST nvmf_zcopy 00:33:34.222 ************************************ 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:34.222 * Looking for test storage... 
00:33:34.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:34.222 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:34.223 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:33:34.223 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.507 --rc genhtml_branch_coverage=1 00:33:34.507 --rc genhtml_function_coverage=1 00:33:34.507 --rc genhtml_legend=1 00:33:34.507 --rc geninfo_all_blocks=1 00:33:34.507 --rc geninfo_unexecuted_blocks=1 00:33:34.507 00:33:34.507 ' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.507 --rc genhtml_branch_coverage=1 00:33:34.507 --rc genhtml_function_coverage=1 00:33:34.507 --rc genhtml_legend=1 00:33:34.507 --rc geninfo_all_blocks=1 00:33:34.507 --rc geninfo_unexecuted_blocks=1 00:33:34.507 00:33:34.507 ' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.507 --rc genhtml_branch_coverage=1 00:33:34.507 --rc genhtml_function_coverage=1 00:33:34.507 --rc genhtml_legend=1 00:33:34.507 --rc geninfo_all_blocks=1 00:33:34.507 --rc geninfo_unexecuted_blocks=1 00:33:34.507 00:33:34.507 ' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:34.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.507 --rc genhtml_branch_coverage=1 00:33:34.507 --rc genhtml_function_coverage=1 00:33:34.507 --rc genhtml_legend=1 00:33:34.507 --rc geninfo_all_blocks=1 00:33:34.507 --rc geninfo_unexecuted_blocks=1 00:33:34.507 00:33:34.507 ' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.507 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.508 11:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.508 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:41.339 11:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:41.339 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:41.339 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.339 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:41.340 Found net devices under 0000:31:00.0: cvl_0_0 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:41.340 Found net devices under 0000:31:00.1: cvl_0_1 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.340 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:41.602 11:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:41.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:33:41.602 00:33:41.602 --- 10.0.0.2 ping statistics --- 00:33:41.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.602 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:33:41.602 00:33:41.602 --- 10.0.0.1 ping statistics --- 00:33:41.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.602 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:41.602 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3787267 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3787267 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
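Condensed for reference, the namespace plumbing traced above reduces to the sequence below (interface names, addresses, and the iptables rule are exactly as logged; ipts is just a wrapper that appends a bookkeeping comment to iptables):

    # One port of the E810 pair is moved into a private namespace so the
    # target and initiator exchange traffic over the physical link.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator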
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3787267 ']' 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:41.863 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:41.863 [2024-12-09 11:48:33.869414] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:41.863 [2024-12-09 11:48:33.870514] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:33:41.863 [2024-12-09 11:48:33.870564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.863 [2024-12-09 11:48:33.968398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.863 [2024-12-09 11:48:34.003222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.863 [2024-12-09 11:48:34.003254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.863 [2024-12-09 11:48:34.003266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.863 [2024-12-09 11:48:34.003273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.863 [2024-12-09 11:48:34.003279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:41.863 [2024-12-09 11:48:34.003854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.124 [2024-12-09 11:48:34.059632] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:42.124 [2024-12-09 11:48:34.059870] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
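The target itself is then launched inside that namespace. A condensed view of the startup traced above (paths and flags as logged; waitforlisten is the autotest_common.sh helper that polls the RPC socket):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &   # -m 0x2: one reactor on core 1; interrupt mode is what this suite exercises
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs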
00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.697 [2024-12-09 11:48:34.700597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.697 [2024-12-09 11:48:34.728798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:42.697 11:48:34 
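With the app listening, target/zcopy.sh provisions it over RPC. The rpc_cmd calls traced here and immediately below are equivalent to invoking scripts/rpc.py by hand:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy     # -c 0: no in-capsule data; --zcopy: zero-copy enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB RAM bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1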
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.697 malloc0 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.697 { 00:33:42.697 "params": { 00:33:42.697 "name": "Nvme$subsystem", 00:33:42.697 "trtype": "$TEST_TRANSPORT", 00:33:42.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.697 "adrfam": "ipv4", 00:33:42.697 "trsvcid": "$NVMF_PORT", 00:33:42.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.697 "hdgst": ${hdgst:-false}, 00:33:42.697 "ddgst": ${ddgst:-false} 00:33:42.697 }, 00:33:42.697 "method": "bdev_nvme_attach_controller" 00:33:42.697 } 00:33:42.697 EOF 00:33:42.697 )") 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:42.697 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:42.697 "params": { 00:33:42.697 "name": "Nvme1", 00:33:42.697 "trtype": "tcp", 00:33:42.697 "traddr": "10.0.0.2", 00:33:42.697 "adrfam": "ipv4", 00:33:42.697 "trsvcid": "4420", 00:33:42.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:42.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:42.697 "hdgst": false, 00:33:42.697 "ddgst": false 00:33:42.697 }, 00:33:42.697 "method": "bdev_nvme_attach_controller" 00:33:42.697 }' 00:33:42.697 [2024-12-09 11:48:34.836796] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
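gen_nvmf_target_json splices the bdev_nvme_attach_controller fragment printed above into a complete bdevperf config and hands it over as a file descriptor (--json /dev/fd/62), so nothing is written to disk. The wrapper is roughly the following; this is a sketch inferred from the printed fragment, and the exact shape lives in nvmf/common.sh:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }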
00:33:42.697 [2024-12-09 11:48:34.836847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787344 ] 00:33:42.959 [2024-12-09 11:48:34.907787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.959 [2024-12-09 11:48:34.943843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.219 Running I/O for 10 seconds... 00:33:45.100 6643.00 IOPS, 51.90 MiB/s [2024-12-09T10:48:38.645Z] 6697.00 IOPS, 52.32 MiB/s [2024-12-09T10:48:39.585Z] 6707.33 IOPS, 52.40 MiB/s [2024-12-09T10:48:40.526Z] 6719.00 IOPS, 52.49 MiB/s [2024-12-09T10:48:41.469Z] 7043.20 IOPS, 55.02 MiB/s [2024-12-09T10:48:42.415Z] 7494.17 IOPS, 58.55 MiB/s [2024-12-09T10:48:43.357Z] 7816.71 IOPS, 61.07 MiB/s [2024-12-09T10:48:44.298Z] 8055.62 IOPS, 62.93 MiB/s [2024-12-09T10:48:45.680Z] 8243.33 IOPS, 64.40 MiB/s [2024-12-09T10:48:45.680Z] 8394.40 IOPS, 65.58 MiB/s 00:33:53.518 Latency(us) 00:33:53.518 [2024-12-09T10:48:45.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.518 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:53.518 Verification LBA range: start 0x0 length 0x1000 00:33:53.518 Nvme1n1 : 10.01 8397.29 65.60 0.00 0.00 15190.62 1720.32 27306.67 00:33:53.518 [2024-12-09T10:48:45.680Z] =================================================================================================================== 00:33:53.518 [2024-12-09T10:48:45.680Z] Total : 8397.29 65.60 0.00 0.00 15190.62 1720.32 27306.67 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3789347 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.518 { 00:33:53.518 "params": { 00:33:53.518 "name": "Nvme$subsystem", 00:33:53.518 "trtype": "$TEST_TRANSPORT", 00:33:53.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.518 "adrfam": "ipv4", 00:33:53.518 "trsvcid": "$NVMF_PORT", 00:33:53.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.518 "hdgst": ${hdgst:-false}, 00:33:53.518 "ddgst": ${ddgst:-false} 00:33:53.518 }, 00:33:53.518 "method": "bdev_nvme_attach_controller" 00:33:53.518 } 00:33:53.518 EOF 00:33:53.518 )") 00:33:53.518 [2024-12-09 11:48:45.400185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:53.518 [2024-12-09 11:48:45.400213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.518 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:53.519 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:53.519 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:53.519 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:53.519 "params": { 00:33:53.519 "name": "Nvme1", 00:33:53.519 "trtype": "tcp", 00:33:53.519 "traddr": "10.0.0.2", 00:33:53.519 "adrfam": "ipv4", 00:33:53.519 "trsvcid": "4420", 00:33:53.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.519 "hdgst": false, 00:33:53.519 "ddgst": false 00:33:53.519 }, 00:33:53.519 "method": "bdev_nvme_attach_controller" 00:33:53.519 }' 00:33:53.519 [2024-12-09 11:48:45.412154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.412164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.424151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.424160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.436151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.436161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.444122] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:33:53.519 [2024-12-09 11:48:45.444171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789347 ] 00:33:53.519 [2024-12-09 11:48:45.448151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.448161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.460151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.460160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.472151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.472161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.484151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.484160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.496151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.496160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.508151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.508159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.515223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.519 [2024-12-09 11:48:45.520151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.520160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.532152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.532162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.544151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.544160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.550571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.519 [2024-12-09 11:48:45.556153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.556164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.568158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.568170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.580156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.580170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.592152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:53.519 [2024-12-09 11:48:45.592162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.604153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.604165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.616150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.616159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.628159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.628177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.640153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.640164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.652153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.652165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.664155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.664170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.519 [2024-12-09 11:48:45.676155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.519 [2024-12-09 11:48:45.676169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.780 [2024-12-09 11:48:45.688155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.780 [2024-12-09 11:48:45.688169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.780 Running I/O for 5 seconds... 
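The flood of paired errors that follows appears to be driven by the test rather than being a failure: while the 5-second randrw job runs, target/zcopy.sh keeps re-adding a namespace that already exists, forcing repeated subsystem pause/resume under active zero-copy I/O. A hypothetical reconstruction of that loop (the exact form is in target/zcopy.sh):

    while kill -0 "$perfpid" 2> /dev/null; do
        # NSID 1 is already attached, so every attempt fails with the
        # "Requested NSID 1 already in use" / "Unable to add namespace" pair.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done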
00:33:53.780 [2024-12-09 11:48:45.702423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.780 [2024-12-09 11:48:45.702440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair repeats roughly 160 more times, every 12-15 ms, from 11:48:45.702 through 11:48:47.848 while the randrw job runs; only bdevperf's interleaved throughput samples are kept below]
19083.00 IOPS, 149.09 MiB/s [2024-12-09T10:48:46.724Z]
19131.00 IOPS, 149.46 MiB/s [2024-12-09T10:48:47.770Z]
00:33:55.869 [2024-12-09 11:48:47.848650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:55.869 [2024-12-09 11:48:47.848665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.863357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.863372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.876410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.876424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.891231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.891246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.904202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.904217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.916781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.916796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.931408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.931424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.944620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.944635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.959279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.959295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.972314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.972330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.984826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.984845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:47.999284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:47.999300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:48.012754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:48.012769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.869 [2024-12-09 11:48:48.027049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.869 [2024-12-09 11:48:48.027064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.039986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.040002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.053278] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.053293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.067399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.067415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.079769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.079784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.092414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.092429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.105227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.105241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.119537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.119553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.132658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.132673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.147096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.147111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.160117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.160132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.173473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.173488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.187139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.187154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.200300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.200316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.213670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.213685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.227496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.227512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.240623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.240638] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.255412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.255428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.268279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.268295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.130 [2024-12-09 11:48:48.281164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.130 [2024-12-09 11:48:48.281179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.295694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.295710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.308533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.308548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.323450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.323465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.336550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.336565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.351162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.351178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.364330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.364346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.377280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.377295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.391321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.391336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.404542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.404557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.419202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.419219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.432201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.432217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.444930] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.444946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.458986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.459002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.471955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.471971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.485054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.485070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.499055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.499071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.511995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.512015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.524697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.524711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.391 [2024-12-09 11:48:48.539448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.391 [2024-12-09 11:48:48.539464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.552604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.552619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.567489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.567505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.580641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.580656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.595071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.595086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.607961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.607976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.620612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.620626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.635226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.635241] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.648224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.648240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.660959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.660974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.675303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.675319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.687876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.687891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.700758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.700773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 19155.33 IOPS, 149.65 MiB/s [2024-12-09T10:48:48.815Z] [2024-12-09 11:48:48.715097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.715112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.728042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.728058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.740884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.740903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.755436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.755453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.768532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.768548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.783521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.783537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.796366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.796382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.653 [2024-12-09 11:48:48.808933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.653 [2024-12-09 11:48:48.808949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.823780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.823796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 
11:48:48.836913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.836928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.851317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.851333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.864072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.864088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.877151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.877166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.891335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.891351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.904329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.904345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.917413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.917429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.931261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.931277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.944242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.944258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.957351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.957367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.971310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.971327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.984509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.984525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:48.999597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:48.999617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:49.012722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:49.012737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:49.027224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:49.027240] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:49.040068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:49.040084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:49.053440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:49.053455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.915 [2024-12-09 11:48:49.067109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.915 [2024-12-09 11:48:49.067125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.079790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.079807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.093337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.093353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.107266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.107282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.120519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.120535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.135356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.135371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.148500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.148515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.162989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.163004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.175689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.175705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.189114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.189129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.177 [2024-12-09 11:48:49.203133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.177 [2024-12-09 11:48:49.203150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.215842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.215857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.229099] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.229114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.242985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.243001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.256050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.256070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.269420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.269436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.283287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.283303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.296465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.296480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.311636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.311651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.178 [2024-12-09 11:48:49.324563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.178 [2024-12-09 11:48:49.324578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.339016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.339032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.351844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.351859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.364382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.364398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.377008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.377028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.391077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.391092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.404039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.404055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.416789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.416804] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.431204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.431219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.444006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.444026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.456961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.456976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.471041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.471057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.484110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.484126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.496693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.496708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.511544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.511564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.524704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.524720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.538931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.538947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.551709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.551725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.564340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.564355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.577285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.577300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.440 [2024-12-09 11:48:49.591095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.440 [2024-12-09 11:48:49.591110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.604468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.604485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.619279] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.619293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.632275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.632291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.645434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.645449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.659276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.659291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.672105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.672121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.685170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.685185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.699126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.699141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 19147.50 IOPS, 149.59 MiB/s [2024-12-09T10:48:49.864Z] [2024-12-09 11:48:49.711205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.711220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.723457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.723472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.735773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.735788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.748433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.748448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.763132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.763147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.776225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.776240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.788861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.788876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.803110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:57.702 [2024-12-09 11:48:49.803127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.816187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.816203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.829287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.829302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.843743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.843758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.702 [2024-12-09 11:48:49.856978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.702 [2024-12-09 11:48:49.856993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.871457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.871473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.884738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.884753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.898850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.898866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.912172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.912187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.924989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.925003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.939340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.939355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.952512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.952527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.967215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.967231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.980050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.980065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:49.993115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:49.993129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.007334] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.007350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.020491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.020507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.034809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.034825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.047775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.047791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.060779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.060794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.075454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.075470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.088458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.088472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.103572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.103587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.964 [2024-12-09 11:48:50.116694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.964 [2024-12-09 11:48:50.116709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.131059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.131076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.144127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.144143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.156902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.156918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.171647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.171663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.184723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.184738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.199302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.199318] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.212640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.212655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.227565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.227580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.240672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.240688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.255015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.255031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.267763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.267778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.280700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.280716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.295372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.295387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.308498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.308513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.323399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.323414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.336500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.336515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.351049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.351065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.363932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.363947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.225 [2024-12-09 11:48:50.377270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.225 [2024-12-09 11:48:50.377285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.391530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.391545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.404610] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.404625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.419281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.419296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.432188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.432204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.444830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.444844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.459294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.459309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.472450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.472465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.487065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.487080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.500060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.500075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.512949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.512964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.527641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.527662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.540929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.540946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.555589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.555605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.568796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.568812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.583040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.583056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.487 [2024-12-09 11:48:50.596133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.487 [2024-12-09 11:48:50.596149] 
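The flood above is the zcopy test hammering the same RPC: while bdevperf holds NSID 1 attached to nqn.2016-06.io.spdk:cnode1, the script keeps asking the target to add a namespace under that NSID, and subsystem.c rejects every attempt. A minimal sketch of a loop that reproduces the same error pair against a running target follows; the loop shape and iteration count are illustrative assumptions rather than the literal body of zcopy.sh, while the rpc.py client, the subsystem NQN, and the bdev name are the ones visible in this log.

    #!/usr/bin/env bash
    # Re-issue nvmf_subsystem_add_ns for an NSID that is already attached;
    # each call fails on the target with "Requested NSID 1 already in use".
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 50); do
        # NSID 1 is still held by the running I/O job, so this cannot succeed.
        "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done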
00:33:58.749 19160.40 IOPS, 149.69 MiB/s
00:33:58.749 Latency(us)
[2024-12-09T10:48:50.911Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average      min      max
00:33:58.749 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:58.749 Nvme1n1                     :       5.01   19160.40     149.69      0.00     0.00    6673.87  2621.44  12069.55
[2024-12-09T10:48:50.911Z] ===================================================================================================================
[2024-12-09T10:48:50.911Z] Total                       :            19160.40     149.69      0.00     0.00    6673.87  2621.44  12069.55
00:33:58.749 [2024-12-09 11:48:50.716157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.749 [2024-12-09 11:48:50.716172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair continues every 12 ms through 11:48:50.824 as the run winds down ...]
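The summary row is internally consistent: at the job's 8 KiB I/O size, the MiB/s column follows directly from the IOPS column, which a one-line shell check (bc assumed available) confirms.

    # 19160.40 I/Os per second x 8192 bytes per I/O, expressed in MiB/s.
    echo 'scale=2; 19160.40 * 8192 / (1024 * 1024)' | bc
    # -> 149.69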
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.749 [2024-12-09 11:48:50.764154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.749 [2024-12-09 11:48:50.764164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.749 [2024-12-09 11:48:50.776154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.749 [2024-12-09 11:48:50.776164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.749 [2024-12-09 11:48:50.788151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.749 [2024-12-09 11:48:50.788161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.749 [2024-12-09 11:48:50.800156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.749 [2024-12-09 11:48:50.800168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.749 [2024-12-09 11:48:50.812153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.750 [2024-12-09 11:48:50.812164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.750 [2024-12-09 11:48:50.824152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.750 [2024-12-09 11:48:50.824161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3789347) - No such process 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3789347 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.750 delay0 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.750 11:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:59.012 [2024-12-09 11:48:50.935018] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:05.602 Initializing NVMe Controllers 00:34:05.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:05.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:05.602 Initialization complete. Launching workers. 00:34:05.602 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2934 00:34:05.602 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3221, failed to submit 33 00:34:05.602 success 3060, unsuccessful 161, failed 0 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.602 rmmod nvme_tcp 00:34:05.602 rmmod nvme_fabrics 00:34:05.602 rmmod nvme_keyring 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3787267 ']' 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3787267 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3787267 ']' 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3787267 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3787267 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3787267' 00:34:05.602 killing process with pid 3787267 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3787267 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3787267 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.602 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.863 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.863 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.864 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.864 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.780 00:34:07.780 real 0m33.632s 00:34:07.780 user 0m43.359s 00:34:07.780 sys 0m11.725s 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:07.780 ************************************ 00:34:07.780 END TEST nvmf_zcopy 00:34:07.780 ************************************ 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.780 ************************************ 00:34:07.780 START TEST nvmf_nmic 00:34:07.780 ************************************ 00:34:07.780 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:08.042 * Looking for test storage... 
00:34:08.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:08.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.042 --rc genhtml_branch_coverage=1 00:34:08.042 --rc genhtml_function_coverage=1 00:34:08.042 --rc genhtml_legend=1 00:34:08.042 --rc geninfo_all_blocks=1 00:34:08.042 --rc geninfo_unexecuted_blocks=1 00:34:08.042 00:34:08.042 ' 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:08.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.042 --rc genhtml_branch_coverage=1 00:34:08.042 --rc genhtml_function_coverage=1 00:34:08.042 --rc genhtml_legend=1 00:34:08.042 --rc geninfo_all_blocks=1 00:34:08.042 --rc geninfo_unexecuted_blocks=1 00:34:08.042 00:34:08.042 ' 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:08.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.042 --rc genhtml_branch_coverage=1 00:34:08.042 --rc genhtml_function_coverage=1 00:34:08.042 --rc genhtml_legend=1 00:34:08.042 --rc geninfo_all_blocks=1 00:34:08.042 --rc geninfo_unexecuted_blocks=1 00:34:08.042 00:34:08.042 ' 00:34:08.042 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:08.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.042 --rc genhtml_branch_coverage=1 00:34:08.042 --rc genhtml_function_coverage=1 00:34:08.042 --rc genhtml_legend=1 00:34:08.043 --rc geninfo_all_blocks=1 00:34:08.043 --rc geninfo_unexecuted_blocks=1 00:34:08.043 00:34:08.043 ' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.043 11:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.043 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:16.189 11:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.189 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:16.190 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.190 11:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:16.190 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:16.190 Found net devices under 0000:31:00.0: cvl_0_0 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.190 
11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:16.190 Found net devices under 0000:31:00.1: cvl_0_1 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.190 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
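[Annotation] The nvmf_tcp_init trace around this point builds a two-endpoint topology on a single host: the target port (cvl_0_0) is moved into its own network namespace while the initiator port (cvl_0_1) stays in the default one, so 10.0.0.1 and 10.0.0.2 talk over the two physical E810 ports. A condensed sketch of the same setup, assuming the interface names and addresses from the trace (link-up and the ping check follow just below):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # reachability check, as below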
00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:34:16.190 00:34:16.190 --- 10.0.0.2 ping statistics --- 00:34:16.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.190 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:34:16.190 00:34:16.190 --- 10.0.0.1 ping statistics --- 00:34:16.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.190 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.190 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3795969 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3795969 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3795969 ']' 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.191 11:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 [2024-12-09 11:49:07.354319] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:16.191 [2024-12-09 11:49:07.355453] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:34:16.191 [2024-12-09 11:49:07.355505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.191 [2024-12-09 11:49:07.439456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.191 [2024-12-09 11:49:07.482330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.191 [2024-12-09 11:49:07.482367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.191 [2024-12-09 11:49:07.482375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.191 [2024-12-09 11:49:07.482382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.191 [2024-12-09 11:49:07.482388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.191 [2024-12-09 11:49:07.483935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.191 [2024-12-09 11:49:07.484059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.191 [2024-12-09 11:49:07.484167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.191 [2024-12-09 11:49:07.484167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.191 [2024-12-09 11:49:07.541595] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:16.191 [2024-12-09 11:49:07.541609] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:16.191 [2024-12-09 11:49:07.542578] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
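[Annotation] The nvmfappstart step above launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the app's JSON-RPC socket answers; the interrupt-mode NOTICE lines around this point confirm the reactors and nvmf poll groups came up in intr mode rather than busy polling. A minimal sketch of that launch-and-wait pattern from the repo root — the polling loop is an assumption standing in for waitforlisten, not its exact code:

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    # wait until the JSON-RPC server responds on the default socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done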
00:34:16.191 [2024-12-09 11:49:07.543294] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:16.191 [2024-12-09 11:49:07.543353] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 [2024-12-09 11:49:08.196904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 Malloc0 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
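[Annotation] The five rpc_cmd calls above provision the entire nmic target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1, its namespace, and a listener on port 4420. rpc_cmd in the harness is a wrapper around scripts/rpc.py, so the same bring-up issued by hand would look roughly like:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420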
00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 [2024-12-09 11:49:08.272803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:16.191 test case1: single bdev can't be used in multiple subsystems 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 [2024-12-09 11:49:08.308556] bdev.c:8511:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:16.191 [2024-12-09 11:49:08.308578] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:16.191 [2024-12-09 11:49:08.308586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:16.191 request: 00:34:16.191 { 00:34:16.191 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:16.191 "namespace": { 00:34:16.191 "bdev_name": "Malloc0", 00:34:16.191 "no_auto_visible": false, 00:34:16.191 "hide_metadata": false 00:34:16.191 }, 00:34:16.191 "method": "nvmf_subsystem_add_ns", 00:34:16.191 "req_id": 1 00:34:16.191 } 00:34:16.191 Got JSON-RPC error response 00:34:16.191 response: 00:34:16.191 { 00:34:16.191 "code": -32602, 00:34:16.191 "message": "Invalid parameters" 00:34:16.191 } 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:16.191 11:49:08 
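[Annotation] Test case1 above deliberately provokes the -32602 error: Malloc0 is already claimed (exclusive_write) by cnode1, so adding it to cnode2 must fail and the test treats that failure as a pass. Test case2 then attaches the host to the same subsystem over two paths: the nvme connect on port 4420 above is followed by a second connect on port 4421. In generic nvme-cli form, with the generated hostnqn/hostid pair elided:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=<nqn> --hostid=<id>
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=<nqn> --hostid=<id>
    nvme list-subsys    # should show one subsystem with two live paths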
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:16.191 Adding namespace failed - expected result. 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:16.191 test case2: host connect to nvmf target in multiple paths 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.191 [2024-12-09 11:49:08.320665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:16.191 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.192 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:16.764 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:17.025 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:17.025 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:17.025 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:17.025 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:17.025 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:19.573 11:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:19.573 11:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:19.573 11:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:19.573 11:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:19.573 11:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:19.573 11:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:19.573 11:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:19.573 [global] 00:34:19.573 thread=1 00:34:19.574 invalidate=1 
00:34:19.574 rw=write 00:34:19.574 time_based=1 00:34:19.574 runtime=1 00:34:19.574 ioengine=libaio 00:34:19.574 direct=1 00:34:19.574 bs=4096 00:34:19.574 iodepth=1 00:34:19.574 norandommap=0 00:34:19.574 numjobs=1 00:34:19.574 00:34:19.574 verify_dump=1 00:34:19.574 verify_backlog=512 00:34:19.574 verify_state_save=0 00:34:19.574 do_verify=1 00:34:19.574 verify=crc32c-intel 00:34:19.574 [job0] 00:34:19.574 filename=/dev/nvme0n1 00:34:19.574 Could not set queue depth (nvme0n1) 00:34:19.574 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:19.574 fio-3.35 00:34:19.574 Starting 1 thread 00:34:20.521 00:34:20.521 job0: (groupid=0, jobs=1): err= 0: pid=3796898: Mon Dec 9 11:49:12 2024 00:34:20.521 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:20.521 slat (nsec): min=6987, max=58376, avg=26258.81, stdev=3164.46 00:34:20.521 clat (usec): min=562, max=1243, avg=1002.30, stdev=93.68 00:34:20.521 lat (usec): min=588, max=1269, avg=1028.56, stdev=94.16 00:34:20.521 clat percentiles (usec): 00:34:20.521 | 1.00th=[ 725], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 922], 00:34:20.521 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1037], 00:34:20.521 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:34:20.521 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:34:20.521 | 99.99th=[ 1237] 00:34:20.521 write: IOPS=675, BW=2701KiB/s (2766kB/s)(2704KiB/1001msec); 0 zone resets 00:34:20.521 slat (usec): min=9, max=30631, avg=75.80, stdev=1177.02 00:34:20.521 clat (usec): min=223, max=854, avg=611.23, stdev=97.78 00:34:20.521 lat (usec): min=235, max=31446, avg=687.04, stdev=1189.19 00:34:20.521 clat percentiles (usec): 00:34:20.521 | 1.00th=[ 367], 5.00th=[ 420], 10.00th=[ 478], 20.00th=[ 529], 00:34:20.521 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 644], 00:34:20.521 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:34:20.521 | 99.00th=[ 816], 99.50th=[ 824], 99.90th=[ 857], 99.95th=[ 857], 00:34:20.521 | 99.99th=[ 857] 00:34:20.521 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:20.521 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:20.521 lat (usec) : 250=0.17%, 500=8.42%, 750=46.46%, 1000=19.87% 00:34:20.521 lat (msec) : 2=25.08% 00:34:20.521 cpu : usr=2.00%, sys=3.30%, ctx=1191, majf=0, minf=1 00:34:20.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.521 issued rwts: total=512,676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:20.521 00:34:20.521 Run status group 0 (all jobs): 00:34:20.521 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:34:20.521 WRITE: bw=2701KiB/s (2766kB/s), 2701KiB/s-2701KiB/s (2766kB/s-2766kB/s), io=2704KiB (2769kB), run=1001-1001msec 00:34:20.521 00:34:20.521 Disk stats (read/write): 00:34:20.521 nvme0n1: ios=537/517, merge=0/0, ticks=1470/304, in_queue=1774, util=98.80% 00:34:20.521 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:20.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:20.783 11:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:20.783 rmmod nvme_tcp 00:34:20.783 rmmod nvme_fabrics 00:34:20.783 rmmod nvme_keyring 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3795969 ']' 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3795969 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3795969 ']' 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3795969 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:20.783 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3795969 00:34:21.044 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.044 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.045 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3795969' 00:34:21.045 killing process with pid 3795969 00:34:21.045 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3795969 00:34:21.045 11:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3795969 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.045 11:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:23.592 00:34:23.592 real 0m15.249s 00:34:23.592 user 0m35.440s 00:34:23.592 sys 0m7.234s 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:23.592 ************************************ 00:34:23.592 END TEST nvmf_nmic 00:34:23.592 ************************************ 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:23.592 ************************************ 00:34:23.592 START TEST nvmf_fio_target 00:34:23.592 ************************************ 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:23.592 * Looking for test storage... 
00:34:23.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:23.592 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:23.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.593 --rc genhtml_branch_coverage=1 00:34:23.593 --rc genhtml_function_coverage=1 00:34:23.593 --rc genhtml_legend=1 00:34:23.593 --rc geninfo_all_blocks=1 00:34:23.593 --rc geninfo_unexecuted_blocks=1 00:34:23.593 00:34:23.593 ' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:23.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.593 --rc genhtml_branch_coverage=1 00:34:23.593 --rc genhtml_function_coverage=1 00:34:23.593 --rc genhtml_legend=1 00:34:23.593 --rc geninfo_all_blocks=1 00:34:23.593 --rc geninfo_unexecuted_blocks=1 00:34:23.593 00:34:23.593 ' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:23.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.593 --rc genhtml_branch_coverage=1 00:34:23.593 --rc genhtml_function_coverage=1 00:34:23.593 --rc genhtml_legend=1 00:34:23.593 --rc geninfo_all_blocks=1 00:34:23.593 --rc geninfo_unexecuted_blocks=1 00:34:23.593 00:34:23.593 ' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:23.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.593 --rc genhtml_branch_coverage=1 00:34:23.593 --rc genhtml_function_coverage=1 00:34:23.593 --rc genhtml_legend=1 00:34:23.593 --rc geninfo_all_blocks=1 00:34:23.593 --rc geninfo_unexecuted_blocks=1 00:34:23.593 
00:34:23.593 ' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:23.593 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.735 11:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.735 11:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:31.735 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:31.735 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:31.735 Found net 
devices under 0000:31:00.0: cvl_0_0 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:31.735 Found net devices under 0000:31:00.1: cvl_0_1 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:31.735 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:31.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:31.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:34:31.736 00:34:31.736 --- 10.0.0.2 ping statistics --- 00:34:31.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.736 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:31.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:31.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:34:31.736 00:34:31.736 --- 10.0.0.1 ping statistics --- 00:34:31.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.736 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3801298 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3801298 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3801298 ']' 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
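(For reference: the target process whose startup is traced here is launched inside the cvl_0_0_ns_spdk network namespace with interrupt mode enabled. A minimal sketch of the equivalent manual invocation, assuming the same SPDK build tree and that the namespace and cvl_0_0 interface have already been configured as shown earlier in this log:

  # launch the SPDK NVMe-oF target in the test namespace (sketch; paths/flags as in this run)
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  # -m 0xF runs reactors on cores 0-3; --interrupt-mode is what produces the
  # "Set spdk_thread (...) to intr mode" notices that follow in the log.)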
00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.736 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.736 [2024-12-09 11:49:23.053963] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:31.736 [2024-12-09 11:49:23.055124] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:34:31.736 [2024-12-09 11:49:23.055177] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.736 [2024-12-09 11:49:23.140434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:31.736 [2024-12-09 11:49:23.183350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.736 [2024-12-09 11:49:23.183389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.736 [2024-12-09 11:49:23.183397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.736 [2024-12-09 11:49:23.183404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.736 [2024-12-09 11:49:23.183410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.736 [2024-12-09 11:49:23.185007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.736 [2024-12-09 11:49:23.185159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:31.736 [2024-12-09 11:49:23.185402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:31.736 [2024-12-09 11:49:23.185404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.736 [2024-12-09 11:49:23.243607] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:31.736 [2024-12-09 11:49:23.243617] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:31.736 [2024-12-09 11:49:23.244588] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:31.736 [2024-12-09 11:49:23.244831] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:31.736 [2024-12-09 11:49:23.245058] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
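(The RPC calls traced below provision the target end to end. Condensed into a plain script — a sketch assuming rpc.py is on PATH and the target is already listening on /var/tmp/spdk.sock — the sequence is:

  # create the TCP transport, then back the subsystem with malloc and raid bdevs
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512          # -> Malloc0; repeated below for Malloc1..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each command above appears verbatim in the trace that follows; the four namespaces are what the initiator later sees as /dev/nvme0n1..nvme0n4.)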
00:34:31.736 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.736 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:31.736 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.736 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.736 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.997 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.997 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:31.997 [2024-12-09 11:49:24.054347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.997 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:32.258 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:32.258 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:32.518 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:32.518 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:32.518 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:32.518 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:32.779 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:32.779 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:33.040 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.301 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:33.301 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.301 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:33.301 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.562 11:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:33.562 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:33.562 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:33.822 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:33.822 11:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:34.083 11:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:34.083 11:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:34.083 11:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.343 [2024-12-09 11:49:26.378176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.343 11:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:34.603 11:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:34.864 11:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:35.124 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:35.124 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:35.124 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:35.124 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:35.124 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:35.125 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:37.037 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:37.037 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:34:37.037 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:37.037 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:37.037 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:37.037 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:37.037 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:37.297 [global] 00:34:37.297 thread=1 00:34:37.297 invalidate=1 00:34:37.297 rw=write 00:34:37.297 time_based=1 00:34:37.297 runtime=1 00:34:37.297 ioengine=libaio 00:34:37.297 direct=1 00:34:37.297 bs=4096 00:34:37.297 iodepth=1 00:34:37.297 norandommap=0 00:34:37.297 numjobs=1 00:34:37.297 00:34:37.297 verify_dump=1 00:34:37.297 verify_backlog=512 00:34:37.297 verify_state_save=0 00:34:37.297 do_verify=1 00:34:37.297 verify=crc32c-intel 00:34:37.297 [job0] 00:34:37.297 filename=/dev/nvme0n1 00:34:37.297 [job1] 00:34:37.297 filename=/dev/nvme0n2 00:34:37.297 [job2] 00:34:37.297 filename=/dev/nvme0n3 00:34:37.297 [job3] 00:34:37.297 filename=/dev/nvme0n4 00:34:37.297 Could not set queue depth (nvme0n1) 00:34:37.297 Could not set queue depth (nvme0n2) 00:34:37.297 Could not set queue depth (nvme0n3) 00:34:37.297 Could not set queue depth (nvme0n4) 00:34:37.557 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:37.558 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:37.558 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:37.558 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:37.558 fio-3.35 00:34:37.558 Starting 4 threads 00:34:38.945 00:34:38.945 job0: (groupid=0, jobs=1): err= 0: pid=3802873: Mon Dec 9 11:49:30 2024 00:34:38.945 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:38.945 slat (nsec): min=6881, max=55483, avg=26372.35, stdev=3104.56 00:34:38.945 clat (usec): min=764, max=2591, avg=1043.84, stdev=114.02 00:34:38.945 lat (usec): min=791, max=2616, avg=1070.21, stdev=114.15 00:34:38.945 clat percentiles (usec): 00:34:38.945 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 963], 00:34:38.946 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:34:38.946 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:34:38.946 | 99.00th=[ 1254], 99.50th=[ 1254], 99.90th=[ 2606], 99.95th=[ 2606], 00:34:38.946 | 99.99th=[ 2606] 00:34:38.946 write: IOPS=844, BW=3377KiB/s (3458kB/s)(3380KiB/1001msec); 0 zone resets 00:34:38.946 slat (nsec): min=9705, max=70208, avg=28854.65, stdev=10925.39 00:34:38.946 clat (usec): min=137, max=1099, avg=494.87, stdev=193.97 00:34:38.946 lat (usec): min=147, max=1134, avg=523.73, stdev=199.18 00:34:38.946 clat percentiles (usec): 00:34:38.946 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 172], 20.00th=[ 293], 00:34:38.946 | 30.00th=[ 383], 40.00th=[ 469], 50.00th=[ 529], 60.00th=[ 586], 00:34:38.946 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 758], 00:34:38.946 | 
99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 1106], 99.95th=[ 1106], 00:34:38.946 | 99.99th=[ 1106] 00:34:38.946 bw ( KiB/s): min= 4096, max= 4096, per=41.57%, avg=4096.00, stdev= 0.00, samples=1 00:34:38.946 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:38.946 lat (usec) : 250=8.40%, 500=19.31%, 750=31.02%, 1000=14.22% 00:34:38.946 lat (msec) : 2=26.97%, 4=0.07% 00:34:38.946 cpu : usr=1.90%, sys=4.00%, ctx=1358, majf=0, minf=1 00:34:38.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 issued rwts: total=512,845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:38.946 job1: (groupid=0, jobs=1): err= 0: pid=3802874: Mon Dec 9 11:49:30 2024 00:34:38.946 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:38.946 slat (nsec): min=25316, max=60268, avg=26576.48, stdev=3313.55 00:34:38.946 clat (usec): min=737, max=1273, avg=1047.73, stdev=89.03 00:34:38.946 lat (usec): min=763, max=1299, avg=1074.31, stdev=88.93 00:34:38.946 clat percentiles (usec): 00:34:38.946 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 988], 00:34:38.946 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:34:38.946 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1172], 00:34:38.946 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:34:38.946 | 99.99th=[ 1270] 00:34:38.946 write: IOPS=672, BW=2689KiB/s (2754kB/s)(2692KiB/1001msec); 0 zone resets 00:34:38.946 slat (nsec): min=9711, max=66420, avg=30153.39, stdev=10173.10 00:34:38.946 clat (usec): min=202, max=1218, avg=624.19, stdev=129.61 00:34:38.946 lat (usec): min=214, max=1253, avg=654.34, stdev=133.53 00:34:38.946 clat percentiles (usec): 00:34:38.946 | 1.00th=[ 302], 5.00th=[ 388], 10.00th=[ 445], 20.00th=[ 515], 00:34:38.946 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 660], 00:34:38.946 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 824], 00:34:38.946 | 99.00th=[ 914], 99.50th=[ 955], 99.90th=[ 1221], 99.95th=[ 1221], 00:34:38.946 | 99.99th=[ 1221] 00:34:38.946 bw ( KiB/s): min= 4096, max= 4096, per=41.57%, avg=4096.00, stdev= 0.00, samples=1 00:34:38.946 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:38.946 lat (usec) : 250=0.17%, 500=10.13%, 750=38.82%, 1000=17.47% 00:34:38.946 lat (msec) : 2=33.42% 00:34:38.946 cpu : usr=1.30%, sys=4.00%, ctx=1186, majf=0, minf=1 00:34:38.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 issued rwts: total=512,673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:38.946 job2: (groupid=0, jobs=1): err= 0: pid=3802877: Mon Dec 9 11:49:30 2024 00:34:38.946 read: IOPS=18, BW=73.6KiB/s (75.4kB/s)(76.0KiB/1032msec) 00:34:38.946 slat (nsec): min=25289, max=26123, avg=25621.47, stdev=247.09 00:34:38.946 clat (usec): min=1001, max=42013, avg=39470.69, stdev=9326.11 00:34:38.946 lat (usec): min=1028, max=42038, avg=39496.32, stdev=9326.00 00:34:38.946 clat percentiles (usec): 00:34:38.946 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[40633], 
20.00th=[41157], 00:34:38.946 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:38.946 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:38.946 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:38.946 | 99.99th=[42206] 00:34:38.946 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:34:38.946 slat (nsec): min=9768, max=69422, avg=32201.55, stdev=7320.52 00:34:38.946 clat (usec): min=141, max=1306, avg=510.13, stdev=144.52 00:34:38.946 lat (usec): min=152, max=1317, avg=542.33, stdev=145.86 00:34:38.946 clat percentiles (usec): 00:34:38.946 | 1.00th=[ 194], 5.00th=[ 297], 10.00th=[ 326], 20.00th=[ 383], 00:34:38.946 | 30.00th=[ 437], 40.00th=[ 469], 50.00th=[ 502], 60.00th=[ 545], 00:34:38.946 | 70.00th=[ 578], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 742], 00:34:38.946 | 99.00th=[ 816], 99.50th=[ 979], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:38.946 | 99.99th=[ 1303] 00:34:38.946 bw ( KiB/s): min= 4096, max= 4096, per=41.57%, avg=4096.00, stdev= 0.00, samples=1 00:34:38.946 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:38.946 lat (usec) : 250=1.51%, 500=45.57%, 750=45.20%, 1000=3.77% 00:34:38.946 lat (msec) : 2=0.56%, 50=3.39% 00:34:38.946 cpu : usr=0.78%, sys=1.55%, ctx=531, majf=0, minf=2 00:34:38.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:38.946 job3: (groupid=0, jobs=1): err= 0: pid=3802879: Mon Dec 9 11:49:30 2024 00:34:38.946 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:34:38.946 slat (nsec): min=26362, max=29123, avg=26827.41, stdev=679.09 00:34:38.946 clat (usec): min=1072, max=42028, avg=39483.72, stdev=9900.92 00:34:38.946 lat (usec): min=1101, max=42055, avg=39510.55, stdev=9900.33 00:34:38.946 clat percentiles (usec): 00:34:38.946 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:34:38.946 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:38.946 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:38.946 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:38.946 | 99.99th=[42206] 00:34:38.946 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:34:38.946 slat (nsec): min=9062, max=54469, avg=29428.76, stdev=9851.46 00:34:38.946 clat (usec): min=257, max=1354, avg=620.86, stdev=121.42 00:34:38.946 lat (usec): min=267, max=1389, avg=650.29, stdev=126.20 00:34:38.946 clat percentiles (usec): 00:34:38.946 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 515], 00:34:38.946 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[ 660], 00:34:38.946 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:34:38.946 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 1352], 99.95th=[ 1352], 00:34:38.946 | 99.99th=[ 1352] 00:34:38.946 bw ( KiB/s): min= 4096, max= 4096, per=41.57%, avg=4096.00, stdev= 0.00, samples=1 00:34:38.946 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:38.946 lat (usec) : 500=17.77%, 750=65.78%, 1000=13.04% 00:34:38.946 lat (msec) : 2=0.38%, 50=3.02% 00:34:38.946 cpu : usr=0.70%, sys=2.18%, ctx=529, majf=0, minf=2 
00:34:38.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.946 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:38.946 00:34:38.946 Run status group 0 (all jobs): 00:34:38.946 READ: bw=4109KiB/s (4207kB/s), 67.5KiB/s-2046KiB/s (69.1kB/s-2095kB/s), io=4240KiB (4342kB), run=1001-1032msec 00:34:38.946 WRITE: bw=9853KiB/s (10.1MB/s), 1984KiB/s-3377KiB/s (2032kB/s-3458kB/s), io=9.93MiB (10.4MB), run=1001-1032msec 00:34:38.946 00:34:38.946 Disk stats (read/write): 00:34:38.946 nvme0n1: ios=522/512, merge=0/0, ticks=1478/204, in_queue=1682, util=98.30% 00:34:38.946 nvme0n2: ios=443/512, merge=0/0, ticks=755/314, in_queue=1069, util=98.54% 00:34:38.946 nvme0n3: ios=39/512, merge=0/0, ticks=920/251, in_queue=1171, util=90.96% 00:34:38.946 nvme0n4: ios=10/512, merge=0/0, ticks=419/252, in_queue=671, util=88.68% 00:34:38.946 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:38.946 [global] 00:34:38.946 thread=1 00:34:38.946 invalidate=1 00:34:38.946 rw=randwrite 00:34:38.946 time_based=1 00:34:38.946 runtime=1 00:34:38.946 ioengine=libaio 00:34:38.946 direct=1 00:34:38.946 bs=4096 00:34:38.946 iodepth=1 00:34:38.946 norandommap=0 00:34:38.946 numjobs=1 00:34:38.946 00:34:38.946 verify_dump=1 00:34:38.946 verify_backlog=512 00:34:38.946 verify_state_save=0 00:34:38.946 do_verify=1 00:34:38.946 verify=crc32c-intel 00:34:38.946 [job0] 00:34:38.946 filename=/dev/nvme0n1 00:34:38.946 [job1] 00:34:38.946 filename=/dev/nvme0n2 00:34:38.946 [job2] 00:34:38.946 filename=/dev/nvme0n3 00:34:38.946 [job3] 00:34:38.946 filename=/dev/nvme0n4 00:34:38.946 Could not set queue depth (nvme0n1) 00:34:38.946 Could not set queue depth (nvme0n2) 00:34:38.946 Could not set queue depth (nvme0n3) 00:34:38.946 Could not set queue depth (nvme0n4) 00:34:39.208 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.208 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.208 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.208 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.208 fio-3.35 00:34:39.208 Starting 4 threads 00:34:40.596 00:34:40.596 job0: (groupid=0, jobs=1): err= 0: pid=3803402: Mon Dec 9 11:49:32 2024 00:34:40.596 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:40.596 slat (nsec): min=6791, max=66023, avg=24284.74, stdev=7622.12 00:34:40.596 clat (usec): min=421, max=41436, avg=968.43, stdev=3100.66 00:34:40.596 lat (usec): min=430, max=41502, avg=992.72, stdev=3101.92 00:34:40.596 clat percentiles (usec): 00:34:40.596 | 1.00th=[ 453], 5.00th=[ 545], 10.00th=[ 586], 20.00th=[ 644], 00:34:40.596 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 750], 60.00th=[ 766], 00:34:40.596 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 898], 00:34:40.596 | 99.00th=[ 1029], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:34:40.596 | 99.99th=[41681] 00:34:40.596 write: IOPS=814, BW=3257KiB/s 
(3335kB/s)(3260KiB/1001msec); 0 zone resets 00:34:40.596 slat (nsec): min=9457, max=55311, avg=30991.10, stdev=9609.92 00:34:40.596 clat (usec): min=138, max=938, avg=557.84, stdev=129.50 00:34:40.596 lat (usec): min=147, max=973, avg=588.83, stdev=132.46 00:34:40.596 clat percentiles (usec): 00:34:40.596 | 1.00th=[ 265], 5.00th=[ 363], 10.00th=[ 388], 20.00th=[ 453], 00:34:40.596 | 30.00th=[ 478], 40.00th=[ 515], 50.00th=[ 553], 60.00th=[ 586], 00:34:40.596 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 750], 00:34:40.596 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 938], 99.95th=[ 938], 00:34:40.596 | 99.99th=[ 938] 00:34:40.596 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:34:40.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:40.596 lat (usec) : 250=0.53%, 500=22.61%, 750=54.79%, 1000=21.55% 00:34:40.596 lat (msec) : 2=0.30%, 50=0.23% 00:34:40.596 cpu : usr=2.40%, sys=4.10%, ctx=1329, majf=0, minf=1 00:34:40.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.596 issued rwts: total=512,815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:40.596 job1: (groupid=0, jobs=1): err= 0: pid=3803403: Mon Dec 9 11:49:32 2024 00:34:40.596 read: IOPS=15, BW=62.8KiB/s (64.3kB/s)(64.0KiB/1019msec) 00:34:40.596 slat (nsec): min=24795, max=25779, avg=25201.88, stdev=225.90 00:34:40.596 clat (usec): min=40881, max=42016, avg=41762.64, stdev=383.08 00:34:40.596 lat (usec): min=40907, max=42042, avg=41787.84, stdev=383.02 00:34:40.596 clat percentiles (usec): 00:34:40.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:34:40.596 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:40.596 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:40.596 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:40.596 | 99.99th=[42206] 00:34:40.596 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:34:40.596 slat (nsec): min=9317, max=67908, avg=28845.27, stdev=7982.45 00:34:40.596 clat (usec): min=302, max=965, avg=646.22, stdev=117.43 00:34:40.596 lat (usec): min=312, max=996, avg=675.06, stdev=119.53 00:34:40.596 clat percentiles (usec): 00:34:40.596 | 1.00th=[ 347], 5.00th=[ 424], 10.00th=[ 494], 20.00th=[ 562], 00:34:40.596 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 685], 00:34:40.596 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 832], 00:34:40.596 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:34:40.596 | 99.99th=[ 963] 00:34:40.596 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:34:40.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:40.596 lat (usec) : 500=10.61%, 750=69.13%, 1000=17.23% 00:34:40.596 lat (msec) : 50=3.03% 00:34:40.596 cpu : usr=0.88%, sys=1.38%, ctx=528, majf=0, minf=2 00:34:40.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.596 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.596 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:34:40.596 job2: (groupid=0, jobs=1): err= 0: pid=3803404: Mon Dec 9 11:49:32 2024 00:34:40.596 read: IOPS=20, BW=81.6KiB/s (83.6kB/s)(84.0KiB/1029msec) 00:34:40.596 slat (nsec): min=24467, max=30097, avg=25622.81, stdev=1060.78 00:34:40.596 clat (usec): min=698, max=42300, avg=38071.40, stdev=12359.73 00:34:40.596 lat (usec): min=724, max=42326, avg=38097.02, stdev=12358.94 00:34:40.596 clat percentiles (usec): 00:34:40.596 | 1.00th=[ 701], 5.00th=[ 1090], 10.00th=[41681], 20.00th=[41681], 00:34:40.596 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:40.596 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:40.596 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:40.596 | 99.99th=[42206] 00:34:40.596 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:34:40.596 slat (nsec): min=9835, max=47672, avg=24369.15, stdev=10399.23 00:34:40.596 clat (usec): min=121, max=913, avg=415.19, stdev=201.76 00:34:40.596 lat (usec): min=132, max=944, avg=439.56, stdev=208.84 00:34:40.596 clat percentiles (usec): 00:34:40.596 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 174], 00:34:40.596 | 30.00th=[ 249], 40.00th=[ 355], 50.00th=[ 416], 60.00th=[ 494], 00:34:40.596 | 70.00th=[ 594], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 693], 00:34:40.596 | 99.00th=[ 750], 99.50th=[ 799], 99.90th=[ 914], 99.95th=[ 914], 00:34:40.596 | 99.99th=[ 914] 00:34:40.596 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:34:40.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:40.596 lat (usec) : 250=28.89%, 500=29.08%, 750=37.15%, 1000=1.13% 00:34:40.596 lat (msec) : 2=0.19%, 50=3.56% 00:34:40.596 cpu : usr=0.49%, sys=1.36%, ctx=533, majf=0, minf=2 00:34:40.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.596 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:40.596 job3: (groupid=0, jobs=1): err= 0: pid=3803405: Mon Dec 9 11:49:32 2024 00:34:40.596 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:34:40.596 slat (nsec): min=25414, max=28534, avg=25904.76, stdev=659.88 00:34:40.596 clat (usec): min=694, max=42053, avg=34095.94, stdev=16477.34 00:34:40.596 lat (usec): min=720, max=42078, avg=34121.84, stdev=16477.00 00:34:40.596 clat percentiles (usec): 00:34:40.596 | 1.00th=[ 693], 5.00th=[ 840], 10.00th=[ 1004], 20.00th=[40633], 00:34:40.596 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:40.596 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:40.596 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:40.596 | 99.99th=[42206] 00:34:40.597 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:34:40.597 slat (usec): min=9, max=106, avg=31.61, stdev= 7.55 00:34:40.597 clat (usec): min=136, max=971, avg=535.73, stdev=137.11 00:34:40.597 lat (usec): min=147, max=987, avg=567.35, stdev=137.68 00:34:40.597 clat percentiles (usec): 00:34:40.597 | 1.00th=[ 178], 5.00th=[ 285], 10.00th=[ 347], 20.00th=[ 424], 00:34:40.597 | 30.00th=[ 469], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 586], 00:34:40.597 | 70.00th=[ 
619], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 742], 00:34:40.597 | 99.00th=[ 799], 99.50th=[ 840], 99.90th=[ 971], 99.95th=[ 971], 00:34:40.597 | 99.99th=[ 971] 00:34:40.597 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:34:40.597 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:40.597 lat (usec) : 250=1.88%, 500=32.46%, 750=57.60%, 1000=4.50% 00:34:40.597 lat (msec) : 2=0.38%, 50=3.19% 00:34:40.597 cpu : usr=0.49%, sys=1.88%, ctx=534, majf=0, minf=1 00:34:40.597 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.597 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.597 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:40.597 00:34:40.597 Run status group 0 (all jobs): 00:34:40.597 READ: bw=2216KiB/s (2269kB/s), 62.8KiB/s-2046KiB/s (64.3kB/s-2095kB/s), io=2280KiB (2335kB), run=1001-1029msec 00:34:40.597 WRITE: bw=9139KiB/s (9358kB/s), 1990KiB/s-3257KiB/s (2038kB/s-3335kB/s), io=9404KiB (9630kB), run=1001-1029msec 00:34:40.597 00:34:40.597 Disk stats (read/write): 00:34:40.597 nvme0n1: ios=525/512, merge=0/0, ticks=1443/241, in_queue=1684, util=96.69% 00:34:40.597 nvme0n2: ios=50/512, merge=0/0, ticks=504/318, in_queue=822, util=88.28% 00:34:40.597 nvme0n3: ios=21/512, merge=0/0, ticks=601/205, in_queue=806, util=88.50% 00:34:40.597 nvme0n4: ios=16/512, merge=0/0, ticks=507/255, in_queue=762, util=89.53% 00:34:40.597 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:40.597 [global] 00:34:40.597 thread=1 00:34:40.597 invalidate=1 00:34:40.597 rw=write 00:34:40.597 time_based=1 00:34:40.597 runtime=1 00:34:40.597 ioengine=libaio 00:34:40.597 direct=1 00:34:40.597 bs=4096 00:34:40.597 iodepth=128 00:34:40.597 norandommap=0 00:34:40.597 numjobs=1 00:34:40.597 00:34:40.597 verify_dump=1 00:34:40.597 verify_backlog=512 00:34:40.597 verify_state_save=0 00:34:40.597 do_verify=1 00:34:40.597 verify=crc32c-intel 00:34:40.597 [job0] 00:34:40.597 filename=/dev/nvme0n1 00:34:40.597 [job1] 00:34:40.597 filename=/dev/nvme0n2 00:34:40.597 [job2] 00:34:40.597 filename=/dev/nvme0n3 00:34:40.597 [job3] 00:34:40.597 filename=/dev/nvme0n4 00:34:40.597 Could not set queue depth (nvme0n1) 00:34:40.597 Could not set queue depth (nvme0n2) 00:34:40.597 Could not set queue depth (nvme0n3) 00:34:40.597 Could not set queue depth (nvme0n4) 00:34:41.164 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:41.164 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:41.164 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:41.164 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:41.164 fio-3.35 00:34:41.164 Starting 4 threads 00:34:42.108 00:34:42.108 job0: (groupid=0, jobs=1): err= 0: pid=3803919: Mon Dec 9 11:49:34 2024 00:34:42.108 read: IOPS=6441, BW=25.2MiB/s (26.4MB/s)(25.2MiB/1003msec) 00:34:42.108 slat (nsec): min=924, max=8149.2k, avg=68165.69, stdev=487428.20 00:34:42.108 clat (usec): min=1567, max=28814, avg=8469.99, 
stdev=3211.06 00:34:42.108 lat (usec): min=3470, max=28817, avg=8538.15, stdev=3249.65 00:34:42.108 clat percentiles (usec): 00:34:42.108 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6390], 00:34:42.108 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7570], 60.00th=[ 8094], 00:34:42.108 | 70.00th=[ 8717], 80.00th=[ 9896], 90.00th=[12256], 95.00th=[14877], 00:34:42.108 | 99.00th=[21890], 99.50th=[24773], 99.90th=[25560], 99.95th=[28705], 00:34:42.108 | 99.99th=[28705] 00:34:42.108 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:34:42.108 slat (nsec): min=1624, max=7716.9k, avg=78894.67, stdev=428608.83 00:34:42.108 clat (usec): min=1179, max=30422, avg=10880.87, stdev=6289.33 00:34:42.108 lat (usec): min=1189, max=30438, avg=10959.76, stdev=6327.60 00:34:42.108 clat percentiles (usec): 00:34:42.108 | 1.00th=[ 3359], 5.00th=[ 4113], 10.00th=[ 5014], 20.00th=[ 5800], 00:34:42.108 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 8455], 60.00th=[11076], 00:34:42.108 | 70.00th=[12911], 80.00th=[17957], 90.00th=[20579], 95.00th=[22938], 00:34:42.108 | 99.00th=[26870], 99.50th=[27919], 99.90th=[30278], 99.95th=[30540], 00:34:42.108 | 99.99th=[30540] 00:34:42.108 bw ( KiB/s): min=24319, max=28880, per=30.44%, avg=26599.50, stdev=3225.11, samples=2 00:34:42.108 iops : min= 6079, max= 7220, avg=6649.50, stdev=806.81, samples=2 00:34:42.108 lat (msec) : 2=0.14%, 4=2.39%, 10=65.62%, 20=25.21%, 50=6.64% 00:34:42.108 cpu : usr=4.49%, sys=6.79%, ctx=548, majf=0, minf=1 00:34:42.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:42.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:42.108 issued rwts: total=6461,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:42.108 job1: (groupid=0, jobs=1): err= 0: pid=3803920: Mon Dec 9 11:49:34 2024 00:34:42.108 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:34:42.108 slat (nsec): min=996, max=12739k, avg=79279.64, stdev=623131.43 00:34:42.108 clat (usec): min=3307, max=34268, avg=10387.47, stdev=4305.72 00:34:42.108 lat (usec): min=3315, max=34273, avg=10466.75, stdev=4341.31 00:34:42.108 clat percentiles (usec): 00:34:42.108 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6915], 00:34:42.108 | 30.00th=[ 7373], 40.00th=[ 8225], 50.00th=[ 9110], 60.00th=[10421], 00:34:42.108 | 70.00th=[12256], 80.00th=[13829], 90.00th=[16712], 95.00th=[18220], 00:34:42.108 | 99.00th=[25822], 99.50th=[26084], 99.90th=[32113], 99.95th=[32113], 00:34:42.108 | 99.99th=[34341] 00:34:42.108 write: IOPS=5900, BW=23.0MiB/s (24.2MB/s)(23.3MiB/1011msec); 0 zone resets 00:34:42.108 slat (nsec): min=1694, max=14557k, avg=86982.13, stdev=561250.78 00:34:42.108 clat (usec): min=1173, max=58291, avg=11661.75, stdev=8596.69 00:34:42.108 lat (usec): min=1182, max=58300, avg=11748.73, stdev=8645.56 00:34:42.108 clat percentiles (usec): 00:34:42.108 | 1.00th=[ 3949], 5.00th=[ 4424], 10.00th=[ 5342], 20.00th=[ 6259], 00:34:42.108 | 30.00th=[ 6718], 40.00th=[ 7439], 50.00th=[ 9110], 60.00th=[11600], 00:34:42.108 | 70.00th=[12649], 80.00th=[13566], 90.00th=[20317], 95.00th=[32113], 00:34:42.108 | 99.00th=[50070], 99.50th=[55313], 99.90th=[57410], 99.95th=[58459], 00:34:42.108 | 99.99th=[58459] 00:34:42.108 bw ( KiB/s): min=22128, max=24576, per=26.73%, avg=23352.00, stdev=1731.00, samples=2 00:34:42.108 iops : min= 5532, 
max= 6144, avg=5838.00, stdev=432.75, samples=2 00:34:42.108 lat (msec) : 2=0.02%, 4=0.81%, 10=53.71%, 20=38.91%, 50=5.96% 00:34:42.108 lat (msec) : 100=0.59% 00:34:42.108 cpu : usr=4.26%, sys=6.73%, ctx=400, majf=0, minf=1 00:34:42.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:42.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:42.108 issued rwts: total=5632,5965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:42.108 job2: (groupid=0, jobs=1): err= 0: pid=3803928: Mon Dec 9 11:49:34 2024 00:34:42.108 read: IOPS=4586, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:34:42.108 slat (nsec): min=994, max=9041.5k, avg=85334.69, stdev=558758.78 00:34:42.108 clat (usec): min=1605, max=33066, avg=10437.75, stdev=4054.40 00:34:42.108 lat (usec): min=5655, max=33074, avg=10523.09, stdev=4107.77 00:34:42.108 clat percentiles (usec): 00:34:42.108 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 7898], 00:34:42.108 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:34:42.108 | 70.00th=[10421], 80.00th=[11469], 90.00th=[16909], 95.00th=[19530], 00:34:42.108 | 99.00th=[24511], 99.50th=[26870], 99.90th=[33162], 99.95th=[33162], 00:34:42.108 | 99.99th=[33162] 00:34:42.108 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:34:42.108 slat (nsec): min=1721, max=11910k, avg=112576.13, stdev=644838.21 00:34:42.108 clat (usec): min=831, max=55744, avg=15512.68, stdev=12252.82 00:34:42.108 lat (usec): min=860, max=55753, avg=15625.25, stdev=12341.31 00:34:42.108 clat percentiles (usec): 00:34:42.108 | 1.00th=[ 4490], 5.00th=[ 5800], 10.00th=[ 7177], 20.00th=[ 7635], 00:34:42.108 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[11076], 60.00th=[11994], 00:34:42.108 | 70.00th=[14877], 80.00th=[20317], 90.00th=[38011], 95.00th=[45351], 00:34:42.108 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:34:42.108 | 99.99th=[55837] 00:34:42.108 bw ( KiB/s): min=19504, max=20480, per=22.88%, avg=19992.00, stdev=690.14, samples=2 00:34:42.108 iops : min= 4876, max= 5120, avg=4998.00, stdev=172.53, samples=2 00:34:42.108 lat (usec) : 1000=0.03% 00:34:42.108 lat (msec) : 2=0.38%, 4=0.08%, 10=57.04%, 20=29.46%, 50=11.54% 00:34:42.108 lat (msec) : 100=1.47% 00:34:42.108 cpu : usr=4.38%, sys=5.27%, ctx=377, majf=0, minf=1 00:34:42.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:42.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:42.108 issued rwts: total=4614,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:42.108 job3: (groupid=0, jobs=1): err= 0: pid=3803929: Mon Dec 9 11:49:34 2024 00:34:42.108 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:34:42.108 slat (nsec): min=972, max=14564k, avg=110972.31, stdev=740940.97 00:34:42.108 clat (usec): min=3065, max=65239, avg=13399.42, stdev=7889.63 00:34:42.108 lat (usec): min=3073, max=65249, avg=13510.39, stdev=7958.71 00:34:42.108 clat percentiles (usec): 00:34:42.108 | 1.00th=[ 5407], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 8586], 00:34:42.108 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[11469], 60.00th=[12649], 00:34:42.108 | 70.00th=[14746], 
80.00th=[16188], 90.00th=[20055], 95.00th=[27132], 00:34:42.109 | 99.00th=[52167], 99.50th=[59507], 99.90th=[65274], 99.95th=[65274], 00:34:42.109 | 99.99th=[65274] 00:34:42.109 write: IOPS=4294, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1011msec); 0 zone resets 00:34:42.109 slat (nsec): min=1639, max=12003k, avg=119168.27, stdev=704293.65 00:34:42.109 clat (usec): min=1333, max=67045, avg=16902.83, stdev=13716.05 00:34:42.109 lat (usec): min=1344, max=67053, avg=17022.00, stdev=13805.02 00:34:42.109 clat percentiles (usec): 00:34:42.109 | 1.00th=[ 3458], 5.00th=[ 6063], 10.00th=[ 7832], 20.00th=[ 8455], 00:34:42.109 | 30.00th=[10028], 40.00th=[11994], 50.00th=[12649], 60.00th=[13042], 00:34:42.109 | 70.00th=[14484], 80.00th=[17695], 90.00th=[41681], 95.00th=[51643], 00:34:42.109 | 99.00th=[65274], 99.50th=[65799], 99.90th=[66847], 99.95th=[66847], 00:34:42.109 | 99.99th=[66847] 00:34:42.109 bw ( KiB/s): min=16208, max=17512, per=19.30%, avg=16860.00, stdev=922.07, samples=2 00:34:42.109 iops : min= 4052, max= 4378, avg=4215.00, stdev=230.52, samples=2 00:34:42.109 lat (msec) : 2=0.09%, 4=0.81%, 10=34.16%, 20=51.36%, 50=9.91% 00:34:42.109 lat (msec) : 100=3.67% 00:34:42.109 cpu : usr=2.87%, sys=5.54%, ctx=383, majf=0, minf=1 00:34:42.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:42.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:42.109 issued rwts: total=4096,4342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:42.109 00:34:42.109 Run status group 0 (all jobs): 00:34:42.109 READ: bw=80.4MiB/s (84.3MB/s), 15.8MiB/s-25.2MiB/s (16.6MB/s-26.4MB/s), io=81.3MiB (85.2MB), run=1003-1011msec 00:34:42.109 WRITE: bw=85.3MiB/s (89.5MB/s), 16.8MiB/s-25.9MiB/s (17.6MB/s-27.2MB/s), io=86.3MiB (90.5MB), run=1003-1011msec 00:34:42.109 00:34:42.109 Disk stats (read/write): 00:34:42.109 nvme0n1: ios=5170/5605, merge=0/0, ticks=41346/59974, in_queue=101320, util=87.58% 00:34:42.109 nvme0n2: ios=5102/5120, merge=0/0, ticks=48497/53177, in_queue=101674, util=96.74% 00:34:42.109 nvme0n3: ios=3632/3737, merge=0/0, ticks=19013/31650, in_queue=50663, util=92.60% 00:34:42.109 nvme0n4: ios=3113/3535, merge=0/0, ticks=38541/62427, in_queue=100968, util=92.41% 00:34:42.109 11:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:42.370 [global] 00:34:42.370 thread=1 00:34:42.370 invalidate=1 00:34:42.370 rw=randwrite 00:34:42.370 time_based=1 00:34:42.370 runtime=1 00:34:42.370 ioengine=libaio 00:34:42.370 direct=1 00:34:42.370 bs=4096 00:34:42.370 iodepth=128 00:34:42.370 norandommap=0 00:34:42.370 numjobs=1 00:34:42.370 00:34:42.370 verify_dump=1 00:34:42.370 verify_backlog=512 00:34:42.370 verify_state_save=0 00:34:42.370 do_verify=1 00:34:42.370 verify=crc32c-intel 00:34:42.370 [job0] 00:34:42.370 filename=/dev/nvme0n1 00:34:42.370 [job1] 00:34:42.370 filename=/dev/nvme0n2 00:34:42.370 [job2] 00:34:42.370 filename=/dev/nvme0n3 00:34:42.370 [job3] 00:34:42.370 filename=/dev/nvme0n4 00:34:42.370 Could not set queue depth (nvme0n1) 00:34:42.370 Could not set queue depth (nvme0n2) 00:34:42.370 Could not set queue depth (nvme0n3) 00:34:42.370 Could not set queue depth (nvme0n4) 00:34:42.631 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.631 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.631 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.631 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.631 fio-3.35 00:34:42.631 Starting 4 threads 00:34:44.016 00:34:44.016 job0: (groupid=0, jobs=1): err= 0: pid=3804446: Mon Dec 9 11:49:35 2024 00:34:44.016 read: IOPS=6168, BW=24.1MiB/s (25.3MB/s)(24.2MiB/1006msec) 00:34:44.016 slat (nsec): min=937, max=15562k, avg=79907.72, stdev=678517.85 00:34:44.016 clat (usec): min=3059, max=31980, avg=10665.56, stdev=3701.92 00:34:44.016 lat (usec): min=3064, max=32004, avg=10745.47, stdev=3759.29 00:34:44.016 clat percentiles (usec): 00:34:44.016 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7767], 00:34:44.016 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10290], 00:34:44.016 | 70.00th=[11600], 80.00th=[13960], 90.00th=[16188], 95.00th=[17433], 00:34:44.016 | 99.00th=[21890], 99.50th=[21890], 99.90th=[26346], 99.95th=[28181], 00:34:44.016 | 99.99th=[31851] 00:34:44.016 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:34:44.016 slat (nsec): min=1593, max=8313.6k, avg=69496.21, stdev=488493.87 00:34:44.016 clat (usec): min=529, max=35239, avg=9241.20, stdev=4383.29 00:34:44.016 lat (usec): min=541, max=35246, avg=9310.69, stdev=4409.20 00:34:44.016 clat percentiles (usec): 00:34:44.016 | 1.00th=[ 1713], 5.00th=[ 4178], 10.00th=[ 4948], 20.00th=[ 6063], 00:34:44.016 | 30.00th=[ 6915], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[ 9503], 00:34:44.016 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[14091], 95.00th=[16450], 00:34:44.016 | 99.00th=[30540], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:34:44.016 | 99.99th=[35390] 00:34:44.016 bw ( KiB/s): min=24576, max=28152, per=27.45%, avg=26364.00, stdev=2528.61, samples=2 00:34:44.016 iops : min= 6144, max= 7038, avg=6591.00, stdev=632.15, samples=2 00:34:44.016 lat (usec) : 750=0.10%, 1000=0.01% 00:34:44.016 lat (msec) : 2=0.47%, 4=1.56%, 10=62.38%, 20=33.22%, 50=2.26% 00:34:44.016 cpu : usr=4.08%, sys=6.86%, ctx=480, majf=0, minf=2 00:34:44.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:44.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.016 issued rwts: total=6206,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.016 job1: (groupid=0, jobs=1): err= 0: pid=3804447: Mon Dec 9 11:49:35 2024 00:34:44.016 read: IOPS=6022, BW=23.5MiB/s (24.7MB/s)(24.6MiB/1045msec) 00:34:44.016 slat (nsec): min=892, max=20201k, avg=84081.16, stdev=732469.99 00:34:44.016 clat (usec): min=1468, max=55251, avg=11973.03, stdev=9022.82 00:34:44.016 lat (usec): min=1493, max=55256, avg=12057.11, stdev=9068.12 00:34:44.016 clat percentiles (usec): 00:34:44.016 | 1.00th=[ 3556], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6521], 00:34:44.016 | 30.00th=[ 7504], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[10028], 00:34:44.016 | 70.00th=[11076], 80.00th=[13042], 90.00th=[21627], 95.00th=[34341], 00:34:44.016 | 99.00th=[50070], 99.50th=[51119], 99.90th=[55313], 99.95th=[55313], 00:34:44.016 | 99.99th=[55313] 00:34:44.016 write: 
IOPS=6369, BW=24.9MiB/s (26.1MB/s)(26.0MiB/1045msec); 0 zone resets 00:34:44.016 slat (nsec): min=1485, max=5555.4k, avg=64215.27, stdev=421217.04 00:34:44.016 clat (usec): min=935, max=29052, avg=8599.84, stdev=3875.35 00:34:44.017 lat (usec): min=945, max=29065, avg=8664.05, stdev=3910.14 00:34:44.017 clat percentiles (usec): 00:34:44.017 | 1.00th=[ 1647], 5.00th=[ 3621], 10.00th=[ 4555], 20.00th=[ 6128], 00:34:44.017 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 8717], 60.00th=[ 9241], 00:34:44.017 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11469], 95.00th=[13698], 00:34:44.017 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28705], 99.95th=[28705], 00:34:44.017 | 99.99th=[28967] 00:34:44.017 bw ( KiB/s): min=24576, max=28672, per=27.72%, avg=26624.00, stdev=2896.31, samples=2 00:34:44.017 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:34:44.017 lat (usec) : 1000=0.02% 00:34:44.017 lat (msec) : 2=0.90%, 4=3.12%, 10=64.29%, 20=24.53%, 50=6.75% 00:34:44.017 lat (msec) : 100=0.40% 00:34:44.017 cpu : usr=4.98%, sys=5.27%, ctx=478, majf=0, minf=1 00:34:44.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:44.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.017 issued rwts: total=6294,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.017 job2: (groupid=0, jobs=1): err= 0: pid=3804448: Mon Dec 9 11:49:35 2024 00:34:44.017 read: IOPS=6226, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1005msec) 00:34:44.017 slat (nsec): min=999, max=11165k, avg=76756.23, stdev=592345.80 00:34:44.017 clat (usec): min=2426, max=26988, avg=9981.03, stdev=3109.81 00:34:44.017 lat (usec): min=2467, max=27834, avg=10057.79, stdev=3152.87 00:34:44.017 clat percentiles (usec): 00:34:44.017 | 1.00th=[ 3949], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 8029], 00:34:44.017 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9896], 00:34:44.017 | 70.00th=[10552], 80.00th=[11338], 90.00th=[13698], 95.00th=[16909], 00:34:44.017 | 99.00th=[20055], 99.50th=[22938], 99.90th=[22938], 99.95th=[23462], 00:34:44.017 | 99.99th=[26870] 00:34:44.017 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:34:44.017 slat (nsec): min=1608, max=8889.7k, avg=68567.96, stdev=523913.98 00:34:44.017 clat (usec): min=1013, max=28813, avg=9735.48, stdev=3633.81 00:34:44.017 lat (usec): min=1029, max=28820, avg=9804.05, stdev=3667.96 00:34:44.017 clat percentiles (usec): 00:34:44.017 | 1.00th=[ 2040], 5.00th=[ 4817], 10.00th=[ 6194], 20.00th=[ 7242], 00:34:44.017 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:34:44.017 | 70.00th=[10028], 80.00th=[11994], 90.00th=[15008], 95.00th=[16319], 00:34:44.017 | 99.00th=[21890], 99.50th=[24511], 99.90th=[28181], 99.95th=[28705], 00:34:44.017 | 99.99th=[28705] 00:34:44.017 bw ( KiB/s): min=24576, max=28560, per=27.67%, avg=26568.00, stdev=2817.11, samples=2 00:34:44.017 iops : min= 6144, max= 7140, avg=6642.00, stdev=704.28, samples=2 00:34:44.017 lat (msec) : 2=0.50%, 4=1.79%, 10=64.02%, 20=31.89%, 50=1.80% 00:34:44.017 cpu : usr=5.38%, sys=6.57%, ctx=345, majf=0, minf=1 00:34:44.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:44.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:34:44.017 issued rwts: total=6258,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.017 job3: (groupid=0, jobs=1): err= 0: pid=3804449: Mon Dec 9 11:49:35 2024 00:34:44.017 read: IOPS=4668, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1002msec) 00:34:44.017 slat (nsec): min=978, max=18412k, avg=97818.34, stdev=735258.40 00:34:44.017 clat (usec): min=931, max=52919, avg=13046.12, stdev=7564.41 00:34:44.017 lat (usec): min=2852, max=53009, avg=13143.94, stdev=7618.25 00:34:44.017 clat percentiles (usec): 00:34:44.017 | 1.00th=[ 3621], 5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[ 8094], 00:34:44.017 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10814], 60.00th=[12256], 00:34:44.017 | 70.00th=[13829], 80.00th=[16581], 90.00th=[19792], 95.00th=[26084], 00:34:44.017 | 99.00th=[44303], 99.50th=[49021], 99.90th=[52691], 99.95th=[52691], 00:34:44.017 | 99.99th=[52691] 00:34:44.017 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:34:44.017 slat (nsec): min=1593, max=16566k, avg=99939.41, stdev=720675.30 00:34:44.017 clat (usec): min=2224, max=50900, avg=12797.78, stdev=7131.02 00:34:44.017 lat (usec): min=2234, max=50933, avg=12897.72, stdev=7206.25 00:34:44.017 clat percentiles (usec): 00:34:44.017 | 1.00th=[ 3392], 5.00th=[ 6259], 10.00th=[ 7439], 20.00th=[ 8717], 00:34:44.017 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10552], 00:34:44.017 | 70.00th=[14615], 80.00th=[15664], 90.00th=[22414], 95.00th=[29230], 00:34:44.017 | 99.00th=[40633], 99.50th=[41681], 99.90th=[41681], 99.95th=[50070], 00:34:44.017 | 99.99th=[51119] 00:34:44.017 bw ( KiB/s): min=16720, max=23784, per=21.09%, avg=20252.00, stdev=4995.00, samples=2 00:34:44.017 iops : min= 4180, max= 5946, avg=5063.00, stdev=1248.75, samples=2 00:34:44.017 lat (usec) : 1000=0.01% 00:34:44.017 lat (msec) : 4=1.50%, 10=46.28%, 20=41.44%, 50=10.52%, 100=0.24% 00:34:44.017 cpu : usr=3.60%, sys=5.00%, ctx=334, majf=0, minf=1 00:34:44.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:44.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.017 issued rwts: total=4678,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.017 00:34:44.017 Run status group 0 (all jobs): 00:34:44.017 READ: bw=87.6MiB/s (91.9MB/s), 18.2MiB/s-24.3MiB/s (19.1MB/s-25.5MB/s), io=91.5MiB (96.0MB), run=1002-1045msec 00:34:44.017 WRITE: bw=93.8MiB/s (98.3MB/s), 20.0MiB/s-25.9MiB/s (20.9MB/s-27.1MB/s), io=98.0MiB (103MB), run=1002-1045msec 00:34:44.017 00:34:44.017 Disk stats (read/write): 00:34:44.017 nvme0n1: ios=5529/5632, merge=0/0, ticks=54081/47024, in_queue=101105, util=95.99% 00:34:44.017 nvme0n2: ios=5154/5632, merge=0/0, ticks=33885/26222, in_queue=60107, util=87.35% 00:34:44.017 nvme0n3: ios=5174/5531, merge=0/0, ticks=30910/29263, in_queue=60173, util=100.00% 00:34:44.017 nvme0n4: ios=3883/4096, merge=0/0, ticks=24221/23559, in_queue=47780, util=100.00% 00:34:44.017 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:44.017 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3804548 00:34:44.017 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:44.017 11:49:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:44.017 [global] 00:34:44.017 thread=1 00:34:44.017 invalidate=1 00:34:44.017 rw=read 00:34:44.017 time_based=1 00:34:44.017 runtime=10 00:34:44.017 ioengine=libaio 00:34:44.017 direct=1 00:34:44.017 bs=4096 00:34:44.017 iodepth=1 00:34:44.017 norandommap=1 00:34:44.017 numjobs=1 00:34:44.017 00:34:44.017 [job0] 00:34:44.017 filename=/dev/nvme0n1 00:34:44.017 [job1] 00:34:44.017 filename=/dev/nvme0n2 00:34:44.017 [job2] 00:34:44.017 filename=/dev/nvme0n3 00:34:44.017 [job3] 00:34:44.017 filename=/dev/nvme0n4 00:34:44.017 Could not set queue depth (nvme0n1) 00:34:44.017 Could not set queue depth (nvme0n2) 00:34:44.017 Could not set queue depth (nvme0n3) 00:34:44.017 Could not set queue depth (nvme0n4) 00:34:44.277 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.277 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.277 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.277 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:44.277 fio-3.35 00:34:44.277 Starting 4 threads 00:34:46.820 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:47.080 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:47.080 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=286720, buflen=4096 00:34:47.080 fio: pid=3804946, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:47.341 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:47.341 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:47.341 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:34:47.341 fio: pid=3804940, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:47.602 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1122304, buflen=4096 00:34:47.602 fio: pid=3804905, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:47.602 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:47.602 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:47.602 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:47.602 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc2 00:34:47.602 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=753664, buflen=4096 00:34:47.602 fio: pid=3804921, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:34:47.602 00:34:47.602 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3804905: Mon Dec 9 11:49:39 2024 00:34:47.602 read: IOPS=92, BW=370KiB/s (379kB/s)(1096KiB/2964msec) 00:34:47.602 slat (usec): min=8, max=7074, avg=76.15, stdev=590.05 00:34:47.602 clat (usec): min=595, max=42125, avg=10625.23, stdev=17282.75 00:34:47.602 lat (usec): min=620, max=42149, avg=10701.57, stdev=17265.03 00:34:47.602 clat percentiles (usec): 00:34:47.602 | 1.00th=[ 750], 5.00th=[ 947], 10.00th=[ 1004], 20.00th=[ 1057], 00:34:47.602 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:34:47.602 | 70.00th=[ 1221], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:47.602 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:47.602 | 99.99th=[42206] 00:34:47.602 bw ( KiB/s): min= 88, max= 296, per=19.15%, avg=144.00, stdev=86.90, samples=5 00:34:47.602 iops : min= 22, max= 74, avg=36.00, stdev=21.73, samples=5 00:34:47.602 lat (usec) : 750=0.73%, 1000=9.09% 00:34:47.602 lat (msec) : 2=66.55%, 50=23.27% 00:34:47.602 cpu : usr=0.00%, sys=0.37%, ctx=277, majf=0, minf=1 00:34:47.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 issued rwts: total=275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.602 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3804921: Mon Dec 9 11:49:39 2024 00:34:47.602 read: IOPS=58, BW=233KiB/s (238kB/s)(736KiB/3165msec) 00:34:47.602 slat (usec): min=8, max=16612, avg=276.80, stdev=1841.56 00:34:47.602 clat (usec): min=657, max=42090, avg=16786.09, stdev=19977.37 00:34:47.602 lat (usec): min=761, max=42115, avg=17029.74, stdev=19884.72 00:34:47.602 clat percentiles (usec): 00:34:47.602 | 1.00th=[ 734], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 979], 00:34:47.602 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1156], 00:34:47.602 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:47.602 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:47.602 | 99.99th=[42206] 00:34:47.602 bw ( KiB/s): min= 96, max= 795, per=28.19%, avg=212.50, stdev=285.37, samples=6 00:34:47.602 iops : min= 24, max= 198, avg=53.00, stdev=71.04, samples=6 00:34:47.602 lat (usec) : 750=1.08%, 1000=27.57% 00:34:47.602 lat (msec) : 2=32.43%, 50=38.38% 00:34:47.602 cpu : usr=0.13%, sys=0.28%, ctx=189, majf=0, minf=2 00:34:47.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 issued rwts: total=185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.602 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3804940: Mon Dec 9 11:49:39 2024 00:34:47.602 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(268KiB/2789msec) 
00:34:47.602 slat (nsec): min=27939, max=41636, avg=28715.84, stdev=1738.22 00:34:47.602 clat (usec): min=1018, max=43092, avg=41238.19, stdev=5003.96 00:34:47.602 lat (usec): min=1060, max=43121, avg=41266.83, stdev=5002.35 00:34:47.602 clat percentiles (usec): 00:34:47.602 | 1.00th=[ 1020], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:47.602 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:47.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:47.602 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:47.602 | 99.99th=[43254] 00:34:47.602 bw ( KiB/s): min= 96, max= 96, per=12.77%, avg=96.00, stdev= 0.00, samples=5 00:34:47.602 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:47.602 lat (msec) : 2=1.47%, 50=97.06% 00:34:47.602 cpu : usr=0.14%, sys=0.00%, ctx=69, majf=0, minf=2 00:34:47.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.602 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3804946: Mon Dec 9 11:49:39 2024 00:34:47.602 read: IOPS=26, BW=106KiB/s (109kB/s)(280KiB/2634msec) 00:34:47.602 slat (nsec): min=8736, max=61823, avg=25877.87, stdev=6281.60 00:34:47.602 clat (usec): min=858, max=42105, avg=37219.19, stdev=13101.34 00:34:47.602 lat (usec): min=868, max=42131, avg=37245.06, stdev=13102.64 00:34:47.602 clat percentiles (usec): 00:34:47.602 | 1.00th=[ 857], 5.00th=[ 1057], 10.00th=[ 1057], 20.00th=[41681], 00:34:47.602 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:47.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:47.602 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:47.602 | 99.99th=[42206] 00:34:47.602 bw ( KiB/s): min= 96, max= 144, per=14.23%, avg=107.20, stdev=20.86, samples=5 00:34:47.602 iops : min= 24, max= 36, avg=26.80, stdev= 5.22, samples=5 00:34:47.602 lat (usec) : 1000=2.82% 00:34:47.602 lat (msec) : 2=8.45%, 50=87.32% 00:34:47.602 cpu : usr=0.15%, sys=0.00%, ctx=72, majf=0, minf=2 00:34:47.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.602 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.602 00:34:47.602 Run status group 0 (all jobs): 00:34:47.602 READ: bw=752KiB/s (770kB/s), 96.1KiB/s-370KiB/s (98.4kB/s-379kB/s), io=2380KiB (2437kB), run=2634-3165msec 00:34:47.602 00:34:47.602 Disk stats (read/write): 00:34:47.602 nvme0n1: ios=188/0, merge=0/0, ticks=2813/0, in_queue=2813, util=94.36% 00:34:47.602 nvme0n2: ios=176/0, merge=0/0, ticks=3041/0, in_queue=3041, util=94.49% 00:34:47.602 nvme0n3: ios=92/0, merge=0/0, ticks=3252/0, in_queue=3252, util=99.22% 00:34:47.602 nvme0n4: ios=69/0, merge=0/0, ticks=2566/0, in_queue=2566, util=96.46% 00:34:47.863 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:34:47.863 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:48.123 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:48.123 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:48.123 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:48.123 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:48.384 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:48.384 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3804548 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:48.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:48.645 nvmf hotplug test: fio failed as expected 00:34:48.645 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:48.905 rmmod nvme_tcp 00:34:48.905 rmmod nvme_fabrics 00:34:48.905 rmmod nvme_keyring 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3801298 ']' 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3801298 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3801298 ']' 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3801298 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:48.905 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3801298 00:34:48.905 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:48.905 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:48.905 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3801298' 00:34:48.905 killing process with pid 3801298 00:34:48.905 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3801298 00:34:48.905 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3801298 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
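Note on the hotplug stage traced above: the err=95 (EOPNOTSUPP) and err=5 (EIO) results are the expected outcome rather than a regression. target/fio.sh starts a 10-second read job, sleeps 3 seconds, then deletes the RAID and malloc bdevs backing the namespaces while fio is still issuing I/O, so the reads fail and the script only passes when fio exits non-zero ("nvmf hotplug test: fio failed as expected"). The aggregate line is also internally consistent: the four jobs read 1096+736+268+280 = 2380 KiB, and 2380 KiB over the longest per-job window of 3.165 s gives ≈752 KiB/s, with 752 KiB/s × 1.024 ≈ 770 kB/s, matching the READ summary. A minimal standalone sketch of the same pattern follows; it assumes an already-connected /dev/nvme0n1 namespace and a running SPDK target whose backing bdev is named Malloc0, as in the rpc.py calls above — it is an illustrative reduction, not the test script itself.

  #!/usr/bin/env bash
  # Hypothetical reduction of the hotplug check traced above: start I/O,
  # delete the backing bdev mid-run, then require that fio failed.
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --time_based=1 \
      --runtime=10 --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 &
  fio_pid=$!
  sleep 3
  ./scripts/rpc.py bdev_malloc_delete Malloc0   # pull storage out from under fio
  if wait "$fio_pid"; then
      echo 'unexpected: fio succeeded after bdev removal' >&2
      exit 1
  fi
  echo 'nvmf hotplug test: fio failed as expected'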
00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.166 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:51.715 00:34:51.715 real 0m28.016s 00:34:51.715 user 2m17.014s 00:34:51.715 sys 0m11.764s 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:51.715 ************************************ 00:34:51.715 END TEST nvmf_fio_target 00:34:51.715 ************************************ 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:51.715 ************************************ 00:34:51.715 START TEST nvmf_bdevio 00:34:51.715 ************************************ 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:51.715 * Looking for test storage... 
00:34:51.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:51.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.715 --rc genhtml_branch_coverage=1 00:34:51.715 --rc genhtml_function_coverage=1 00:34:51.715 --rc genhtml_legend=1 00:34:51.715 --rc geninfo_all_blocks=1 00:34:51.715 --rc geninfo_unexecuted_blocks=1 00:34:51.715 00:34:51.715 ' 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:51.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.715 --rc genhtml_branch_coverage=1 00:34:51.715 --rc genhtml_function_coverage=1 00:34:51.715 --rc genhtml_legend=1 00:34:51.715 --rc geninfo_all_blocks=1 00:34:51.715 --rc geninfo_unexecuted_blocks=1 00:34:51.715 00:34:51.715 ' 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:51.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.715 --rc genhtml_branch_coverage=1 00:34:51.715 --rc genhtml_function_coverage=1 00:34:51.715 --rc genhtml_legend=1 00:34:51.715 --rc geninfo_all_blocks=1 00:34:51.715 --rc geninfo_unexecuted_blocks=1 00:34:51.715 00:34:51.715 ' 00:34:51.715 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:51.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.715 --rc genhtml_branch_coverage=1 00:34:51.715 --rc genhtml_function_coverage=1 00:34:51.715 --rc genhtml_legend=1 00:34:51.715 --rc geninfo_all_blocks=1 00:34:51.716 --rc geninfo_unexecuted_blocks=1 00:34:51.716 00:34:51.716 ' 00:34:51.716 11:49:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.716 11:49:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:51.716 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:59.866 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:59.866 11:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:59.866 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:59.866 Found net devices under 0000:31:00.0: cvl_0_0 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:59.866 Found net devices under 0000:31:00.1: cvl_0_1 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.866 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:34:59.867 00:34:59.867 --- 10.0.0.2 ping statistics --- 00:34:59.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.867 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:59.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:34:59.867 00:34:59.867 --- 10.0.0.1 ping statistics --- 00:34:59.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.867 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:59.867 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.867 11:49:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3810052 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3810052 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3810052 ']' 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 [2024-12-09 11:49:51.080707] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:59.867 [2024-12-09 11:49:51.081719] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:34:59.867 [2024-12-09 11:49:51.081760] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.867 [2024-12-09 11:49:51.182233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:59.867 [2024-12-09 11:49:51.229658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.867 [2024-12-09 11:49:51.229703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.867 [2024-12-09 11:49:51.229712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.867 [2024-12-09 11:49:51.229719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.867 [2024-12-09 11:49:51.229725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.867 [2024-12-09 11:49:51.231704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:59.867 [2024-12-09 11:49:51.231875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:59.867 [2024-12-09 11:49:51.232244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:59.867 [2024-12-09 11:49:51.232324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:59.867 [2024-12-09 11:49:51.318078] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
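Condensed, the network plumbing and target launch traced above (nvmf_tcp_init through nvmfappstart) come down to the following shell sketch. Interface names, addresses, the iptables rule, and the nvmf_tgt flags are verbatim from the trace; the grouping and comments are illustrative rather than the literal body of nvmf/common.sh.

# Wire the target-side E810 port into its own network namespace and start
# nvmf_tgt there in interrupt mode (mirrors nvmf_tcp_init + nvmfappstart).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                               # reachability in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk \
  build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &

Keeping the target's port in a separate namespace forces the NVMe/TCP traffic onto the link between the two E810 ports instead of the host loopback path.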
00:34:59.867 [2024-12-09 11:49:51.319242] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:59.867 [2024-12-09 11:49:51.319389] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:59.867 [2024-12-09 11:49:51.320140] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:59.867 [2024-12-09 11:49:51.320185] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 [2024-12-09 11:49:51.921286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 Malloc0 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:59.867 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.867 11:49:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:59.867 [2024-12-09 11:49:52.013608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:59.867 { 00:34:59.867 "params": { 00:34:59.867 "name": "Nvme$subsystem", 00:34:59.867 "trtype": "$TEST_TRANSPORT", 00:34:59.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.867 "adrfam": "ipv4", 00:34:59.867 "trsvcid": "$NVMF_PORT", 00:34:59.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.867 "hdgst": ${hdgst:-false}, 00:34:59.867 "ddgst": ${ddgst:-false} 00:34:59.867 }, 00:34:59.867 "method": "bdev_nvme_attach_controller" 00:34:59.867 } 00:34:59.867 EOF 00:34:59.867 )") 00:34:59.867 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:00.129 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:00.129 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:00.129 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:00.129 "params": { 00:35:00.129 "name": "Nvme1", 00:35:00.129 "trtype": "tcp", 00:35:00.129 "traddr": "10.0.0.2", 00:35:00.129 "adrfam": "ipv4", 00:35:00.129 "trsvcid": "4420", 00:35:00.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:00.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:00.129 "hdgst": false, 00:35:00.129 "ddgst": false 00:35:00.129 }, 00:35:00.129 "method": "bdev_nvme_attach_controller" 00:35:00.129 }' 00:35:00.129 [2024-12-09 11:49:52.080314] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
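The rpc_cmd invocations in bdevio.sh@18-22 above are plain SPDK JSON-RPC calls. Issued by hand they would look like the sketch below; driving scripts/rpc.py at the default /var/tmp/spdk.sock is an assumption (the test's rpc_cmd wrapper resolves the socket itself), while the method names, sizes, NQN, address, and port are the ones visible in the trace.

RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420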
00:35:00.129 [2024-12-09 11:49:52.080393] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810099 ] 00:35:00.129 [2024-12-09 11:49:52.160255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:00.129 [2024-12-09 11:49:52.205073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.129 [2024-12-09 11:49:52.205298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.129 [2024-12-09 11:49:52.205101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:00.390 I/O targets: 00:35:00.390 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:00.390 00:35:00.390 00:35:00.390 CUnit - A unit testing framework for C - Version 2.1-3 00:35:00.390 http://cunit.sourceforge.net/ 00:35:00.390 00:35:00.390 00:35:00.390 Suite: bdevio tests on: Nvme1n1 00:35:00.651 Test: blockdev write read block ...passed 00:35:00.651 Test: blockdev write zeroes read block ...passed 00:35:00.651 Test: blockdev write zeroes read no split ...passed 00:35:00.651 Test: blockdev write zeroes read split ...passed 00:35:00.651 Test: blockdev write zeroes read split partial ...passed 00:35:00.651 Test: blockdev reset ...[2024-12-09 11:49:52.677455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:00.651 [2024-12-09 11:49:52.677523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb60e0 (9): Bad file descriptor 00:35:00.651 [2024-12-09 11:49:52.772241] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
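The JSON printed by gen_nvmf_target_json above is handed to bdevio on file descriptor 62 (--json /dev/fd/62). A standalone equivalent would look roughly as follows; the params object is verbatim from the trace, but the surrounding subsystems/config framing is an assumption based on SPDK's usual --json configuration layout.

# Feed the generated bdev config to bdevio on fd 62 (run from the spdk tree).
test/bdev/bdevio/bdevio --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF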
00:35:00.651 passed 00:35:00.651 Test: blockdev write read 8 blocks ...passed 00:35:00.912 Test: blockdev write read size > 128k ...passed 00:35:00.912 Test: blockdev write read invalid size ...passed 00:35:00.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:00.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:00.912 Test: blockdev write read max offset ...passed 00:35:00.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:00.912 Test: blockdev writev readv 8 blocks ...passed 00:35:00.912 Test: blockdev writev readv 30 x 1block ...passed 00:35:00.912 Test: blockdev writev readv block ...passed 00:35:00.912 Test: blockdev writev readv size > 128k ...passed 00:35:00.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:00.912 Test: blockdev comparev and writev ...[2024-12-09 11:49:52.996960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.996985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.912 [2024-12-09 11:49:52.996996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.997002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.912 [2024-12-09 11:49:52.997580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.997588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:00.912 [2024-12-09 11:49:52.997598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.997604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:00.912 [2024-12-09 11:49:52.998142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.998150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:00.912 [2024-12-09 11:49:52.998160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.998165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:00.912 [2024-12-09 11:49:52.998668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.998677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:00.912 [2024-12-09 11:49:52.998686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:00.912 [2024-12-09 11:49:52.998692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:00.912 passed 00:35:01.174 Test: blockdev nvme passthru rw ...passed 00:35:01.174 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:49:53.082893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:01.174 [2024-12-09 11:49:53.082903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:01.174 [2024-12-09 11:49:53.083269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:01.174 [2024-12-09 11:49:53.083276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:01.174 [2024-12-09 11:49:53.083625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:01.174 [2024-12-09 11:49:53.083632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:01.174 [2024-12-09 11:49:53.083987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:01.174 [2024-12-09 11:49:53.083995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:01.174 passed 00:35:01.174 Test: blockdev nvme admin passthru ...passed 00:35:01.174 Test: blockdev copy ...passed 00:35:01.174 00:35:01.174 Run Summary: Type Total Ran Passed Failed Inactive 00:35:01.174 suites 1 1 n/a 0 0 00:35:01.174 tests 23 23 23 0 0 00:35:01.174 asserts 152 152 152 0 n/a 00:35:01.174 00:35:01.174 Elapsed time = 1.273 seconds 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.174 rmmod nvme_tcp 00:35:01.174 rmmod nvme_fabrics 00:35:01.174 rmmod nvme_keyring 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
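The teardown that the remainder of the trace walks through (nvmftestfini, killprocess, nvmf_tcp_fini) reduces to a handful of commands. A sketch, using the pid and interface names from this run; reading _remove_spdk_ns as a namespace delete is an assumption about that helper's effect.

kill 3810052 && wait 3810052                     # stop the interrupt-mode nvmf_tgt
sync
modprobe -v -r nvme-tcp                          # trace shows nvme_tcp and its deps unloading
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rule
ip netns delete cvl_0_0_ns_spdk                  # what _remove_spdk_ns amounts to here
ip -4 addr flush cvl_0_1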
00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3810052 ']' 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3810052 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3810052 ']' 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3810052 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.174 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3810052 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3810052' 00:35:01.436 killing process with pid 3810052 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3810052 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3810052 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.436 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.698 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.698 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.612 11:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.612 00:35:03.612 real 0m12.320s 00:35:03.612 user 
0m10.363s 00:35:03.612 sys 0m6.459s 00:35:03.612 11:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.612 11:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.612 ************************************ 00:35:03.612 END TEST nvmf_bdevio 00:35:03.612 ************************************ 00:35:03.612 11:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:03.612 00:35:03.612 real 4m57.547s 00:35:03.612 user 10m14.413s 00:35:03.612 sys 2m3.002s 00:35:03.612 11:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.612 11:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:03.612 ************************************ 00:35:03.612 END TEST nvmf_target_core_interrupt_mode 00:35:03.612 ************************************ 00:35:03.612 11:49:55 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:03.612 11:49:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:03.612 11:49:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.612 11:49:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.875 ************************************ 00:35:03.875 START TEST nvmf_interrupt 00:35:03.875 ************************************ 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:03.875 * Looking for test storage... 
00:35:03.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:03.875 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:03.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.876 --rc genhtml_branch_coverage=1 00:35:03.876 --rc genhtml_function_coverage=1 00:35:03.876 --rc genhtml_legend=1 00:35:03.876 --rc geninfo_all_blocks=1 00:35:03.876 --rc geninfo_unexecuted_blocks=1 00:35:03.876 00:35:03.876 ' 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:03.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.876 --rc genhtml_branch_coverage=1 00:35:03.876 --rc genhtml_function_coverage=1 00:35:03.876 --rc genhtml_legend=1 00:35:03.876 --rc geninfo_all_blocks=1 00:35:03.876 --rc geninfo_unexecuted_blocks=1 00:35:03.876 00:35:03.876 ' 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:03.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.876 --rc genhtml_branch_coverage=1 00:35:03.876 --rc genhtml_function_coverage=1 00:35:03.876 --rc genhtml_legend=1 00:35:03.876 --rc geninfo_all_blocks=1 00:35:03.876 --rc geninfo_unexecuted_blocks=1 00:35:03.876 00:35:03.876 ' 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:03.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.876 --rc genhtml_branch_coverage=1 00:35:03.876 --rc genhtml_function_coverage=1 00:35:03.876 --rc genhtml_legend=1 00:35:03.876 --rc geninfo_all_blocks=1 00:35:03.876 --rc geninfo_unexecuted_blocks=1 00:35:03.876 00:35:03.876 ' 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.876 11:49:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:03.876 11:49:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.138 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:04.138 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:04.138 11:49:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:04.139 11:49:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- 
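[editor's note] build_nvmf_app_args, traced just above, accumulates the target's command line in the NVMF_APP array. A sketch of that assembly; the array operations are verbatim from the trace, but the binary path and the interrupt-mode guard variable are hypothetical names (the trace only shows '[' 1 -eq 1 ']'):

    NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")            # hypothetical: base command
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                    # empty unless a no-hugepage run
    if [[ ${TEST_INTERRUPT:-0} -eq 1 ]]; then      # hypothetical flag; evaluates true in this run
        NVMF_APP+=(--interrupt-mode)
    fi
    # later, nvmf_tcp_init prepends the namespace wrapper so the target runs inside it:
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")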
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:12.282 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.282 11:50:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:12.282 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.282 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:12.283 Found net devices under 0000:31:00.0: cvl_0_0 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:12.283 Found net devices under 0000:31:00.1: cvl_0_1 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:12.283 11:50:03 
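[editor's note] The device scan above first matches PCI functions against the e810/x722/mlx ID tables, then resolves each match to its kernel interface through sysfs. The mapping step, with the expansions taken verbatim from the trace (the per-interface operstate check is elided):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # interfaces bound to this function
        # (the suite also verifies each interface is "up" before accepting it)
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep iface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done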
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:12.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:35:12.283 00:35:12.283 --- 10.0.0.2 ping statistics --- 00:35:12.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.283 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:12.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
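[editor's note] nvmf_tcp_init, traced above, carves the two E810 ports into a two-endpoint topology: the target port moves into a private network namespace, each side gets one /24 address, the firewall admits port 4420, and a ping in each direction proves reachability. Condensed, with the names from this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start clean
    ip netns add cvl_0_0_ns_spdk                           # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator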
00:35:12.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:35:12.283 00:35:12.283 --- 10.0.0.1 ping statistics --- 00:35:12.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.283 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3814818 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3814818 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3814818 ']' 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.283 11:50:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.283 [2024-12-09 11:50:03.676452] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:12.283 [2024-12-09 11:50:03.677451] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:35:12.283 [2024-12-09 11:50:03.677490] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.283 [2024-12-09 11:50:03.755786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:12.283 [2024-12-09 11:50:03.790671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
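[editor's note] nvmfappstart then launches the target inside that namespace on a two-core mask and blocks until its RPC socket answers; the command below is verbatim from the trace, while the polling loop is a paraphrase of waitforlisten:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # do not issue rpc.py calls until the app has created its RPC socket
    while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done   # waitforlisten, paraphrased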
00:35:12.283 [2024-12-09 11:50:03.790703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:12.283 [2024-12-09 11:50:03.790711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:12.283 [2024-12-09 11:50:03.790718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:12.283 [2024-12-09 11:50:03.790723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:12.283 [2024-12-09 11:50:03.791873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.283 [2024-12-09 11:50:03.791875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.283 [2024-12-09 11:50:03.847911] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:12.283 [2024-12-09 11:50:03.848586] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:12.283 [2024-12-09 11:50:03.848883] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:12.544 5000+0 records in 00:35:12.544 5000+0 records out 00:35:12.544 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184336 s, 556 MB/s 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.544 AIO0 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.544 [2024-12-09 11:50:04.592495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.544 11:50:04 
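[editor's note] setup_bdev_aio and the transport creation traced above reduce to three commands; rpc_cmd is the suite's wrapper around scripts/rpc.py, shown directly here with the flags exactly as traced (paths shortened to the repo-relative form):

    # 10 MB zero-filled backing file, exposed as AIO bdev "AIO0" with 2048-byte blocks
    dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000
    scripts/rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048
    # TCP transport; -o and -u 8192 come from NVMF_TRANSPORT_OPTS, -q 256 from the test
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256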
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:12.544 [2024-12-09 11:50:04.632817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3814818 0 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3814818 0 idle 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:12.544 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814818 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.25 reactor_0' 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814818 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.25 reactor_0 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
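[editor's note] The subsystem plumbing just traced, as the equivalent direct rpc.py calls (all arguments verbatim from the rpc_cmd lines above):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME  # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0                        # namespace backed by the AIO bdev
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420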
cpu_rate=0 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3814818 1 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3814818 1 idle 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:12.806 11:50:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814823 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814823 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3815064 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3814818 0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3814818 0 busy 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814818 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.25 reactor_0' 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814818 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.25 reactor_0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:13.067 11:50:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814818 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0' 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814818 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3814818 1 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3814818 1 busy 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814823 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.30 reactor_1' 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814823 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.30 reactor_1 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.451 11:50:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3815064 00:35:24.449 Initializing NVMe Controllers 00:35:24.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:24.449 Controller IO queue size 256, less than required. 00:35:24.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:24.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:24.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:24.449 Initialization complete. Launching workers. 
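[editor's note] The workload generator's command line is split across a wrap above; reassembled here (flags as traced; with -w randrw, -M 30 makes the mix 30% reads / 70% writes):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # -c 0xC pins the initiator to cores 2-3, leaving cores 0-1 to the target's
    # reactors so the busy check above measures only NVMe/TCP load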
00:35:24.449 ========================================================
00:35:24.449                                                                            Latency(us)
00:35:24.449 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:35:24.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   19147.37      74.79   13376.65    4308.17   52151.13
00:35:24.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   16758.50      65.46   15281.31    6798.33   19088.10
00:35:24.449 ========================================================
00:35:24.449 Total                                                                    :   35905.87     140.26   14265.62    4308.17   52151.13
00:35:24.449
00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3814818 0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3814818 0 idle 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814818 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:19.98 reactor_0' 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814818 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:19.98 reactor_0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3814818 1 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3814818 1 idle 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- 
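[editor's note] Every reactor_is_idle / reactor_is_busy call in this log expands to the same probe: one batch frame of top, the thread's %CPU field, a threshold compare. A condensed form of the interrupt/common.sh helper (the 10-attempt retry loop that the busy path uses is elided):

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=65 idle_threshold=30   # the perf phase above overrides BUSY_THRESHOLD=30
        hash top || return 1                        # helper requires procps' top
        local top_reactor cpu_rate
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}                     # 99.9 -> 99, 0.0 -> 0
        if [[ $state == busy ]]; then
            (( cpu_rate >= busy_threshold ))        # e.g. 99 while perf runs
        else
            (( cpu_rate <= idle_threshold ))        # e.g. 0 before and after
        fi
    }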
interrupt/common.sh@11 -- # local idx=1 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814823 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.74 reactor_1' 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814823 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.74 reactor_1 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:24.449 11:50:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:24.449 11:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:24.449 11:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:24.449 11:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:24.449 11:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:24.449 11:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- 
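[editor's note] The host-side attach and the waitforserial poll above, written out; hostnqn/hostid are the values nvme gen-hostnqn produced earlier in this run, and the loop bound matches the traced (( i++ <= 15 )):

    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # poll until the block device carrying the target's serial shows up
    i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
        sleep 2
    done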
target/interrupt.sh@52 -- # for i in {0..1} 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3814818 0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3814818 0 idle 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814818 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.22 reactor_0' 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814818 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.22 reactor_0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3814818 1 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3814818 1 idle 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3814818 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3814818 -w 256 00:35:26.361 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3814823 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.88 reactor_1' 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3814823 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.88 reactor_1 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:26.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:26.622 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:26.622 rmmod nvme_tcp 00:35:26.622 rmmod nvme_fabrics 00:35:26.622 rmmod nvme_keyring 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3814818 ']' 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3814818 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3814818 ']' 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3814818 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3814818 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3814818' 00:35:26.883 killing process with pid 3814818 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3814818 00:35:26.883 11:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3814818 00:35:26.883 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:26.883 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:26.883 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:26.883 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:26.883 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:26.883 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:26.883 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:27.144 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.144 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.144 11:50:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.144 11:50:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:27.144 11:50:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.054 11:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.054 00:35:29.054 real 0m25.325s 00:35:29.054 user 0m40.125s 00:35:29.054 sys 0m9.663s 00:35:29.054 11:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.054 11:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:29.054 ************************************ 00:35:29.054 END TEST nvmf_interrupt 00:35:29.054 ************************************ 00:35:29.054 00:35:29.054 real 30m2.866s 00:35:29.054 user 61m27.485s 00:35:29.054 sys 10m7.409s 00:35:29.054 11:50:21 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.054 11:50:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.054 ************************************ 00:35:29.054 END TEST nvmf_tcp 00:35:29.054 ************************************ 00:35:29.054 11:50:21 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:29.054 11:50:21 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:29.054 11:50:21 -- 
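[editor's note] For reference, the nvmftestfini teardown that just ran collapses to the following; the module unloads and iptables filtering are verbatim from the trace, while the namespace deletion is an assumption about what _remove_spdk_ns does (its body runs with xtrace disabled):

    modprobe -v -r nvme-tcp        # retried up to 20 times by nvmfcleanup
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the suite's ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumption: inside _remove_spdk_ns
    ip -4 addr flush cvl_0_1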
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:29.054 11:50:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.054 11:50:21 -- common/autotest_common.sh@10 -- # set +x 00:35:29.315 ************************************ 00:35:29.315 START TEST spdkcli_nvmf_tcp 00:35:29.315 ************************************ 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:29.315 * Looking for test storage... 00:35:29.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:29.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.315 --rc genhtml_branch_coverage=1 00:35:29.315 --rc genhtml_function_coverage=1 00:35:29.315 --rc genhtml_legend=1 00:35:29.315 --rc geninfo_all_blocks=1 00:35:29.315 --rc geninfo_unexecuted_blocks=1 00:35:29.315 00:35:29.315 ' 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:29.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.315 --rc genhtml_branch_coverage=1 00:35:29.315 --rc genhtml_function_coverage=1 00:35:29.315 --rc genhtml_legend=1 00:35:29.315 --rc geninfo_all_blocks=1 00:35:29.315 --rc geninfo_unexecuted_blocks=1 00:35:29.315 00:35:29.315 ' 00:35:29.315 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:29.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.315 --rc genhtml_branch_coverage=1 00:35:29.315 --rc genhtml_function_coverage=1 00:35:29.315 --rc genhtml_legend=1 00:35:29.315 --rc geninfo_all_blocks=1 00:35:29.316 --rc geninfo_unexecuted_blocks=1 00:35:29.316 00:35:29.316 ' 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.316 --rc genhtml_branch_coverage=1 00:35:29.316 --rc genhtml_function_coverage=1 00:35:29.316 --rc genhtml_legend=1 00:35:29.316 --rc geninfo_all_blocks=1 00:35:29.316 --rc geninfo_unexecuted_blocks=1 00:35:29.316 00:35:29.316 ' 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:29.316 
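[Editor's sketch] The cmp_versions trace above is the harness gating lcov coverage flags: each version string is split on '.', '-', and ':' and compared segment by segment, so 1.15 sorts below 2 and the legacy --rc lcov_* options get exported. A stripped-down sketch of that comparison (the helper name lt_version is hypothetical, not the harness's own function):

    lt_version() { # returns 0 when $1 sorts strictly below $2
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"   # split on the same separators as the trace
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1 # equal is not less-than
    }
    # Mirrors the gate in the trace: an lcov older than 2 keeps the legacy flags.
    if lt_version "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi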
11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:29.316 11:50:21 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:29.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3818370 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3818370 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3818370 ']' 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.316 11:50:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.576 [2024-12-09 11:50:21.527440] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
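[Editor's sketch] run_nvmf_tgt in the trace above backgrounds build/bin/nvmf_tgt on a two-core mask (-m 0x3) and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-poll pattern, assuming the standard rpc.py client rather than the harness's exact waitforlisten:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    tgt_pid=$!
    # Poll the UNIX-domain RPC socket; rpc_get_methods succeeds once the
    # app has finished initializing and is listening.
    for ((i = 0; i < 100; i++)); do
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$tgt_pid" 2> /dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
        sleep 0.1
    done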
00:35:29.576 [2024-12-09 11:50:21.527508] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3818370 ] 00:35:29.576 [2024-12-09 11:50:21.604301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:29.576 [2024-12-09 11:50:21.647547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.576 [2024-12-09 11:50:21.647550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.517 11:50:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:30.517 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:30.517 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:30.517 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:30.517 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:30.517 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:30.517 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:30.517 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:30.517 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:30.517 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:30.517 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:30.517 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:30.517 ' 00:35:33.060 [2024-12-09 11:50:24.785109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.000 [2024-12-09 11:50:25.993117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:36.544 [2024-12-09 11:50:28.211776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:38.455 [2024-12-09 11:50:30.117555] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:39.841 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:39.841 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:39.841 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:39.841 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:39.841 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:39.841 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:39.841 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:39.841 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:39.841 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:39.841 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:39.841 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:39.841 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:39.841 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:39.841 11:50:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:40.102 
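[Editor's sketch] check_match above dumps the live configuration tree with scripts/spdkcli.py ll /nvmf and compares it against a stored expectation via test/app/match/match. A rough stand-in using diff (SPDK's match helper can be more lenient than a byte-for-byte comparison, so this is an approximation; the temp file path is illustrative):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/scripts/spdkcli.py" ll /nvmf > /tmp/spdkcli_nvmf.test
    # Fails loudly when the live tree drifts from the recorded expectation.
    diff -u "$spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match" /tmp/spdkcli_nvmf.test
    rm -f /tmp/spdkcli_nvmf.test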
11:50:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:40.102 11:50:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:40.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:40.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:40.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:40.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:40.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:40.102 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:40.102 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:40.102 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:40.102 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:40.102 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:40.102 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:40.102 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:40.102 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:40.102 ' 00:35:45.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:45.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:45.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:45.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:45.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:45.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:45.395 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:45.395 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:45.395 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:45.395 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:45.395 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:45.395 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:45.395 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:45.395 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.395 
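[Editor's sketch] Note the delete order in the clear_nvmf_config batch above: namespaces and hosts come out first, then listeners, then whole subsystems, and the malloc bdevs last, i.e. the exact reverse of creation, so nothing is torn down while something above it still references it. The same ordering expressed through plain rpc.py calls (a sketch; subsystem and bdev names mirror the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1    # namespace before its subsystem
    "$rpc" nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 \
           -t tcp -a 127.0.0.1 -s 4262                               # listener next
    "$rpc" nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3         # then the subsystem itself
    "$rpc" bdev_malloc_delete Malloc6                                # backing bdev last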
11:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3818370 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3818370 ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3818370 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3818370 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3818370' 00:35:45.395 killing process with pid 3818370 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3818370 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3818370 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3818370 ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3818370 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3818370 ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3818370 00:35:45.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3818370) - No such process 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3818370 is not found' 00:35:45.395 Process with pid 3818370 is not found 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:45.395 00:35:45.395 real 0m16.234s 00:35:45.395 user 0m33.590s 00:35:45.395 sys 0m0.722s 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.395 11:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.395 ************************************ 00:35:45.395 END TEST spdkcli_nvmf_tcp 00:35:45.395 ************************************ 00:35:45.395 11:50:37 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:45.395 11:50:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:45.395 11:50:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.395 11:50:37 -- common/autotest_common.sh@10 -- # set +x 00:35:45.395 ************************************ 00:35:45.395 START TEST nvmf_identify_passthru 00:35:45.395 ************************************ 00:35:45.395 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:45.657 * Looking for test 
storage... 00:35:45.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:45.657 11:50:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.657 --rc genhtml_branch_coverage=1 00:35:45.657 --rc genhtml_function_coverage=1 00:35:45.657 --rc genhtml_legend=1 00:35:45.657 --rc geninfo_all_blocks=1 00:35:45.657 --rc geninfo_unexecuted_blocks=1 00:35:45.657 00:35:45.657 ' 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.657 --rc genhtml_branch_coverage=1 00:35:45.657 --rc genhtml_function_coverage=1 00:35:45.657 --rc genhtml_legend=1 00:35:45.657 --rc geninfo_all_blocks=1 00:35:45.657 --rc geninfo_unexecuted_blocks=1 00:35:45.657 00:35:45.657 ' 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.657 --rc genhtml_branch_coverage=1 00:35:45.657 --rc genhtml_function_coverage=1 00:35:45.657 --rc genhtml_legend=1 00:35:45.657 --rc geninfo_all_blocks=1 00:35:45.657 --rc geninfo_unexecuted_blocks=1 00:35:45.657 00:35:45.657 ' 00:35:45.657 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:45.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.657 --rc genhtml_branch_coverage=1 00:35:45.657 --rc genhtml_function_coverage=1 00:35:45.657 --rc genhtml_legend=1 00:35:45.657 --rc geninfo_all_blocks=1 00:35:45.657 --rc geninfo_unexecuted_blocks=1 00:35:45.657 00:35:45.657 ' 00:35:45.657 11:50:37 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:45.657 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:45.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:45.658 11:50:37 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.658 11:50:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:45.658 11:50:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.658 11:50:37 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.658 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:45.658 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:45.658 11:50:37 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:45.658 11:50:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:53.802 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:53.803 11:50:44 
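[Editor's sketch] gather_supported_nvmf_pci_devs, whose trace follows, whitelists Intel E810 (0x159b) and related PCI device IDs and then resolves each matching PCI function to its kernel net device through sysfs. The sysfs half of that lookup in isolation (BDF value taken from the trace below):

    bdf=0000:31:00.0
    # Each entry under the PCI function's net/ directory is an interface
    # name; the harness collects these into pci_net_devs.
    for dev in "/sys/bus/pci/devices/$bdf/net/"*; do
        [[ -e $dev ]] && echo "Found net device under $bdf: ${dev##*/}"
    done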
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:53.803 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:53.803 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:53.803 Found net devices under 0000:31:00.0: cvl_0_0 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:53.803 Found net devices under 0000:31:00.1: cvl_0_1 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:53.803 11:50:44 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:53.803 11:50:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:53.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:53.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:35:53.803 00:35:53.803 --- 10.0.0.2 ping statistics --- 00:35:53.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.803 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:53.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:53.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:35:53.803 00:35:53.803 --- 10.0.0.1 ping statistics --- 00:35:53.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.803 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:53.803 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:53.804 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:53.804 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:53.804 11:50:45 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:53.804 11:50:45 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:53.804 11:50:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:54.375 11:50:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:54.375 11:50:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:54.376 11:50:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:54.376 11:50:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3825366 00:35:54.376 11:50:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:54.376 11:50:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:54.376 11:50:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3825366 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3825366 ']' 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:54.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.376 11:50:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:54.376 [2024-12-09 11:50:46.373041] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:35:54.376 [2024-12-09 11:50:46.373100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:54.376 [2024-12-09 11:50:46.453317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:54.376 [2024-12-09 11:50:46.492205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:54.376 [2024-12-09 11:50:46.492239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:54.376 [2024-12-09 11:50:46.492248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:54.376 [2024-12-09 11:50:46.492254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:54.376 [2024-12-09 11:50:46.492260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:54.376 [2024-12-09 11:50:46.493810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.376 [2024-12-09 11:50:46.493925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:54.376 [2024-12-09 11:50:46.494073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.376 [2024-12-09 11:50:46.494073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:55.318 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.318 INFO: Log level set to 20 00:35:55.318 INFO: Requests: 00:35:55.318 { 00:35:55.318 "jsonrpc": "2.0", 00:35:55.318 "method": "nvmf_set_config", 00:35:55.318 "id": 1, 00:35:55.318 "params": { 00:35:55.318 "admin_cmd_passthru": { 00:35:55.318 "identify_ctrlr": true 00:35:55.318 } 00:35:55.318 } 00:35:55.318 } 00:35:55.318 00:35:55.318 INFO: response: 00:35:55.318 { 00:35:55.318 "jsonrpc": "2.0", 00:35:55.318 "id": 1, 00:35:55.318 "result": true 00:35:55.318 } 00:35:55.318 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.318 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.318 INFO: Setting log level to 20 00:35:55.318 INFO: Setting log level to 20 00:35:55.318 INFO: Log level set to 20 00:35:55.318 INFO: Log level set to 20 00:35:55.318 INFO: Requests: 00:35:55.318 { 00:35:55.318 "jsonrpc": "2.0", 00:35:55.318 "method": "framework_start_init", 00:35:55.318 "id": 1 00:35:55.318 } 00:35:55.318 00:35:55.318 INFO: Requests: 00:35:55.318 { 00:35:55.318 "jsonrpc": "2.0", 00:35:55.318 "method": "framework_start_init", 00:35:55.318 "id": 1 00:35:55.318 } 00:35:55.318 00:35:55.318 [2024-12-09 11:50:47.247780] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:55.318 INFO: response: 00:35:55.318 { 00:35:55.318 "jsonrpc": "2.0", 00:35:55.318 "id": 1, 00:35:55.318 "result": true 00:35:55.318 } 00:35:55.318 00:35:55.318 INFO: response: 00:35:55.318 { 00:35:55.318 "jsonrpc": "2.0", 00:35:55.318 "id": 1, 00:35:55.318 "result": true 00:35:55.318 } 00:35:55.318 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.318 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.318 11:50:47 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:55.318 INFO: Setting log level to 40 00:35:55.318 INFO: Setting log level to 40 00:35:55.318 INFO: Setting log level to 40 00:35:55.318 [2024-12-09 11:50:47.261112] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.318 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.318 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.318 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.579 Nvme0n1 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.579 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.579 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.579 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.579 [2024-12-09 11:50:47.652316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.579 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.579 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:55.579 [ 00:35:55.579 { 00:35:55.579 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:55.579 "subtype": "Discovery", 00:35:55.579 "listen_addresses": [], 00:35:55.579 "allow_any_host": true, 00:35:55.579 "hosts": [] 00:35:55.579 }, 00:35:55.579 { 00:35:55.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:55.579 "subtype": "NVMe", 00:35:55.579 "listen_addresses": [ 00:35:55.579 { 00:35:55.579 "trtype": "TCP", 00:35:55.579 "adrfam": "IPv4", 00:35:55.579 "traddr": "10.0.0.2", 00:35:55.579 "trsvcid": "4420" 00:35:55.579 } 00:35:55.579 ], 00:35:55.579 "allow_any_host": true, 00:35:55.579 "hosts": [], 00:35:55.579 "serial_number": 
"SPDK00000000000001", 00:35:55.579 "model_number": "SPDK bdev Controller", 00:35:55.580 "max_namespaces": 1, 00:35:55.580 "min_cntlid": 1, 00:35:55.580 "max_cntlid": 65519, 00:35:55.580 "namespaces": [ 00:35:55.580 { 00:35:55.580 "nsid": 1, 00:35:55.580 "bdev_name": "Nvme0n1", 00:35:55.580 "name": "Nvme0n1", 00:35:55.580 "nguid": "3634473052605494002538450000002D", 00:35:55.580 "uuid": "36344730-5260-5494-0025-38450000002d" 00:35:55.580 } 00:35:55.580 ] 00:35:55.580 } 00:35:55.580 ] 00:35:55.580 11:50:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.580 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:55.580 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:55.580 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:55.841 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:35:55.841 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:55.841 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:55.841 11:50:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:56.102 11:50:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:56.102 11:50:48 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:35:56.102 11:50:48 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:56.102 11:50:48 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.102 11:50:48 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:56.102 11:50:48 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:56.102 rmmod nvme_tcp 00:35:56.102 rmmod nvme_fabrics 00:35:56.102 rmmod nvme_keyring 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3825366 ']' 00:35:56.102 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3825366 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3825366 ']' 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3825366 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.102 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3825366 00:35:56.362 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.362 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.362 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3825366' 00:35:56.362 killing process with pid 3825366 00:35:56.362 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3825366 00:35:56.362 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3825366 00:35:56.362 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:56.362 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:56.362 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:56.362 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:56.362 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:56.362 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:56.362 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:56.623 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:56.623 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:56.623 11:50:48 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.623 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:56.623 11:50:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.533 11:50:50 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:58.533 00:35:58.533 real 0m13.067s 00:35:58.533 user 0m10.337s 00:35:58.533 sys 0m6.613s 00:35:58.533 11:50:50 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.533 11:50:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:58.533 ************************************ 00:35:58.533 END TEST nvmf_identify_passthru 00:35:58.533 ************************************ 00:35:58.533 11:50:50 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:58.533 11:50:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:58.533 11:50:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:58.533 11:50:50 -- common/autotest_common.sh@10 -- # set +x 00:35:58.533 ************************************ 00:35:58.533 START TEST nvmf_dif 00:35:58.533 ************************************ 00:35:58.533 11:50:50 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:58.794 * Looking for test storage... 
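
Note: end of the identify-passthru test. The ordering shown in the trace matters: nvmf_set_config --passthru-identify-ctrlr has to be issued while the target is still paused (--wait-for-rpc), before framework_start_init. The test then passes because the serial and model numbers read back over NVMe/TCP match the PCIe-side values byte for byte, confirming the identify data was passed through to the backing controller. A sketch of the same RPC sequence outside the harness (rpc_cmd in the trace is the harness wrapper, assumed here to be SPDK's scripts/rpc.py; all flags are copied from the log):

    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # identify over the fabric and compare against the PCIe-side values
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep -E 'Serial Number:|Model Number:'
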
00:35:58.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.794 --rc genhtml_branch_coverage=1 00:35:58.794 --rc genhtml_function_coverage=1 00:35:58.794 --rc genhtml_legend=1 00:35:58.794 --rc geninfo_all_blocks=1 00:35:58.794 --rc geninfo_unexecuted_blocks=1 00:35:58.794 00:35:58.794 ' 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.794 --rc genhtml_branch_coverage=1 00:35:58.794 --rc genhtml_function_coverage=1 00:35:58.794 --rc genhtml_legend=1 00:35:58.794 --rc geninfo_all_blocks=1 00:35:58.794 --rc geninfo_unexecuted_blocks=1 00:35:58.794 00:35:58.794 ' 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:35:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.794 --rc genhtml_branch_coverage=1 00:35:58.794 --rc genhtml_function_coverage=1 00:35:58.794 --rc genhtml_legend=1 00:35:58.794 --rc geninfo_all_blocks=1 00:35:58.794 --rc geninfo_unexecuted_blocks=1 00:35:58.794 00:35:58.794 ' 00:35:58.794 11:50:50 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:58.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.794 --rc genhtml_branch_coverage=1 00:35:58.794 --rc genhtml_function_coverage=1 00:35:58.794 --rc genhtml_legend=1 00:35:58.794 --rc geninfo_all_blocks=1 00:35:58.794 --rc geninfo_unexecuted_blocks=1 00:35:58.794 00:35:58.794 ' 00:35:58.794 11:50:50 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:58.794 11:50:50 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:58.794 11:50:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.794 11:50:50 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.794 11:50:50 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.794 11:50:50 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:58.794 11:50:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:58.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:58.794 11:50:50 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:58.794 11:50:50 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:58.794 11:50:50 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:58.794 11:50:50 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:58.794 11:50:50 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:58.794 11:50:50 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.795 11:50:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:58.795 11:50:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.795 11:50:50 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:58.795 11:50:50 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:58.795 11:50:50 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:58.795 11:50:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:06.934 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.934 
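
Note: the NIC discovery above works from a table of known PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs), then resolves each matching PCI function to its kernel net device through sysfs, exactly as the pci_net_devs expansion in the trace shows. A hypothetical standalone check of that mapping for one of the devices found in this run:

    # resolve a PCI function to its net device name (prints cvl_0_0 in this log)
    pci=0000:31:00.0
    ls "/sys/bus/pci/devices/$pci/net/"
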
11:50:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:06.934 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:06.934 Found net devices under 0000:31:00.0: cvl_0_0 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:06.934 Found net devices under 0000:31:00.1: cvl_0_1 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:06.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:36:06.934 00:36:06.934 --- 10.0.0.2 ping statistics --- 00:36:06.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.934 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:06.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:36:06.934 00:36:06.934 --- 10.0.0.1 ping statistics --- 00:36:06.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.934 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:06.934 11:50:57 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:09.479 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:09.479 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:09.479 11:51:01 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:09.479 11:51:01 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:09.479 11:51:01 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:09.479 11:51:01 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:09.479 11:51:01 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:09.479 11:51:01 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:09.739 11:51:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:09.739 11:51:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:09.739 11:51:01 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:09.739 11:51:01 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3831546 00:36:09.739 11:51:01 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3831546 00:36:09.739 11:51:01 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3831546 ']' 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:09.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.739 11:51:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:09.739 [2024-12-09 11:51:01.724038] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:36:09.739 [2024-12-09 11:51:01.724093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.739 [2024-12-09 11:51:01.805793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.739 [2024-12-09 11:51:01.843212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:09.739 [2024-12-09 11:51:01.843246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:09.739 [2024-12-09 11:51:01.843254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:09.739 [2024-12-09 11:51:01.843261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:09.739 [2024-12-09 11:51:01.843267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:09.739 [2024-12-09 11:51:01.843830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:10.681 11:51:02 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:10.681 11:51:02 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:10.681 11:51:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:10.681 11:51:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:10.681 [2024-12-09 11:51:02.550138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.681 11:51:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.681 11:51:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:10.681 ************************************ 00:36:10.681 START TEST fio_dif_1_default 00:36:10.681 ************************************ 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:10.681 bdev_null0 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:10.681 [2024-12-09 11:51:02.634482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:10.681 { 00:36:10.681 "params": { 00:36:10.681 "name": "Nvme$subsystem", 00:36:10.681 "trtype": "$TEST_TRANSPORT", 00:36:10.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:10.681 "adrfam": "ipv4", 00:36:10.681 "trsvcid": "$NVMF_PORT", 00:36:10.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.681 "hdgst": ${hdgst:-false}, 00:36:10.681 
"ddgst": ${ddgst:-false} 00:36:10.681 }, 00:36:10.681 "method": "bdev_nvme_attach_controller" 00:36:10.681 } 00:36:10.681 EOF 00:36:10.681 )") 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:10.681 "params": { 00:36:10.681 "name": "Nvme0", 00:36:10.681 "trtype": "tcp", 00:36:10.681 "traddr": "10.0.0.2", 00:36:10.681 "adrfam": "ipv4", 00:36:10.681 "trsvcid": "4420", 00:36:10.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:10.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:10.681 "hdgst": false, 00:36:10.681 "ddgst": false 00:36:10.681 }, 00:36:10.681 "method": "bdev_nvme_attach_controller" 00:36:10.681 }' 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:10.681 11:51:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.941 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:10.941 fio-3.35 00:36:10.941 Starting 1 thread 00:36:23.169 00:36:23.169 filename0: (groupid=0, jobs=1): err= 0: pid=3832099: Mon Dec 9 11:51:13 2024 00:36:23.169 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:36:23.169 slat (nsec): min=5479, max=32464, avg=6363.71, stdev=1609.74 00:36:23.169 clat (usec): min=40777, max=42052, avg=40999.48, stdev=134.92 00:36:23.169 lat (usec): min=40782, max=42084, avg=41005.85, stdev=135.33 00:36:23.169 clat percentiles (usec): 00:36:23.169 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:23.169 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:23.169 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:23.169 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:23.169 | 99.99th=[42206] 00:36:23.169 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:36:23.169 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:23.169 lat (msec) : 50=100.00% 00:36:23.169 cpu : usr=93.26%, sys=6.52%, ctx=17, majf=0, minf=223 00:36:23.169 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:23.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.169 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.169 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:23.169 00:36:23.169 Run status group 0 (all jobs): 
00:36:23.169 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.169 00:36:23.169 real 0m11.096s 00:36:23.169 user 0m24.653s 00:36:23.169 sys 0m0.955s 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:23.169 ************************************ 00:36:23.169 END TEST fio_dif_1_default 00:36:23.169 ************************************ 00:36:23.169 11:51:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:23.169 11:51:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:23.169 11:51:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:23.169 11:51:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:23.169 ************************************ 00:36:23.169 START TEST fio_dif_1_multi_subsystems 00:36:23.169 ************************************ 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:23.169 bdev_null0 00:36:23.169 11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]]
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:23.169  [2024-12-09 11:51:13.814595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:23.169  bdev_null1
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
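
The rpc_cmd traces above amount to the standard SPDK target bring-up: create a DIF-capable null bdev, wrap it in an NVMe-oF subsystem, expose the bdev as a namespace, and open a TCP listener. A minimal standalone sketch of the same wiring with scripts/rpc.py (the socket path is rpc.py's usual default and an assumption here, not taken from this run):

    # hypothetical replay of the per-subsystem setup the harness just performed
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
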
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=()
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:23.169  {
00:36:23.169  "params": {
00:36:23.169  "name": "Nvme$subsystem",
00:36:23.169  "trtype": "$TEST_TRANSPORT",
00:36:23.169  "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:23.169  "adrfam": "ipv4",
00:36:23.169  "trsvcid": "$NVMF_PORT",
00:36:23.169  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:23.169  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:23.169  "hdgst": ${hdgst:-false},
00:36:23.169  "ddgst": ${ddgst:-false}
00:36:23.169  },
00:36:23.169  "method": "bdev_nvme_attach_controller"
00:36:23.169  }
00:36:23.169  EOF
00:36:23.169  )")
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 ))
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:23.169  {
00:36:23.169  "params": {
00:36:23.169  "name": "Nvme$subsystem",
00:36:23.169  "trtype": "$TEST_TRANSPORT",
00:36:23.169  "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:23.169  "adrfam": "ipv4",
00:36:23.169  "trsvcid": "$NVMF_PORT",
00:36:23.169  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:23.169  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:23.169  "hdgst": ${hdgst:-false},
00:36:23.169  "ddgst": ${ddgst:-false}
00:36:23.169  },
00:36:23.169  "method": "bdev_nvme_attach_controller"
00:36:23.169  }
00:36:23.169  EOF
00:36:23.169  )")
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ ))
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq .
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=,
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:23.169  "params": {
00:36:23.169  "name": "Nvme0",
00:36:23.169  "trtype": "tcp",
00:36:23.169  "traddr": "10.0.0.2",
00:36:23.169  "adrfam": "ipv4",
00:36:23.169  "trsvcid": "4420",
00:36:23.169  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:23.169  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:23.169  "hdgst": false,
00:36:23.169  "ddgst": false
00:36:23.169  },
00:36:23.169  "method": "bdev_nvme_attach_controller"
00:36:23.169  },{
00:36:23.169  "params": {
00:36:23.169  "name": "Nvme1",
00:36:23.169  "trtype": "tcp",
00:36:23.169  "traddr": "10.0.0.2",
00:36:23.169  "adrfam": "ipv4",
00:36:23.169  "trsvcid": "4420",
00:36:23.169  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:23.169  "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:23.169  "hdgst": false,
00:36:23.169  "ddgst": false
00:36:23.169  },
00:36:23.169  "method": "bdev_nvme_attach_controller"
00:36:23.169  }'
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:23.169  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:23.170  11:51:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:23.170  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:23.170  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:23.170  fio-3.35
00:36:23.170  Starting 2 threads
00:36:33.535  
00:36:33.535  filename0: (groupid=0, jobs=1): err= 0: pid=3834913: Mon Dec 9 11:51:25 2024
00:36:33.535  read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10006msec)
00:36:33.535  slat (nsec): min=5479, max=32990, avg=6472.16, stdev=1909.17
00:36:33.535  clat (usec): min=40844, max=43454, avg=41157.20, stdev=411.55
00:36:33.535  lat (usec): min=40852, max=43487, avg=41163.67, stdev=412.08
00:36:33.535  clat percentiles (usec):
00:36:33.535  | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:36:33.535  | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:33.535  | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206],
00:36:33.535  | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254],
00:36:33.535  | 99.99th=[43254]
00:36:33.535  bw ( KiB/s): min= 384, max= 416, per=50.00%, avg=387.20, stdev= 9.85, samples=20
00:36:33.535  iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20
00:36:33.535  lat (msec) : 50=100.00%
00:36:33.535  cpu : usr=95.37%, sys=4.42%, ctx=18, majf=0, minf=182
00:36:33.535  IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:33.535  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.535  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.535  issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:33.535  latency : target=0, window=0, percentile=100.00%, depth=4
00:36:33.535  filename1: (groupid=0, jobs=1): err= 0: pid=3834914: Mon Dec 9 11:51:25 2024
00:36:33.535  read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10026msec)
00:36:33.535  slat (nsec): min=5468, max=32001, avg=6373.48, stdev=1741.08
00:36:33.535  clat (usec): min=40899, max=43018, avg=41410.62, stdev=594.76
00:36:33.535  lat (usec): min=40904, max=43024, avg=41417.00, stdev=594.79
00:36:33.535  clat percentiles (usec):
00:36:33.535  | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:36:33.535  | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:33.535  | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730],
00:36:33.535  | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254],
00:36:33.535  | 99.99th=[43254]
00:36:33.535  bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=385.60, stdev= 7.16, samples=20
00:36:33.535  iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20
00:36:33.535  lat (msec) : 50=100.00%
00:36:33.535  cpu : usr=95.08%, sys=4.71%, ctx=13, majf=0, minf=85
00:36:33.535  IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:33.535  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.535  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:33.535  issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:33.535  latency : target=0, window=0, percentile=100.00%, depth=4
00:36:33.535  
00:36:33.535  Run status group 0 (all jobs):
00:36:33.535  READ: bw=774KiB/s (793kB/s), 386KiB/s-389KiB/s (395kB/s-398kB/s), io=7760KiB (7946kB), run=10006-10026msec
00:36:33.535  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.536  
00:36:33.536  real	0m11.450s
00:36:33.536  user	0m31.857s
00:36:33.536  sys	0m1.270s
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  ************************************
00:36:33.536  END TEST fio_dif_1_multi_subsystems
00:36:33.536  ************************************
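
Both halves of the test above drive I/O through fio's external SPDK bdev engine rather than the kernel block layer: the harness LD_PRELOADs the bdev fio plugin and feeds it the generated bdev_nvme_attach_controller JSON over /dev/fd/62. A minimal sketch of the same invocation outside the harness (the file names are placeholders, not taken from this run):

    # hypothetical standalone run of the SPDK fio bdev plugin
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev_nvme.json jobfile.fio

Here bdev_nvme.json would carry the same "method": "bdev_nvme_attach_controller" entries that the printf '%s\n' trace shows being generated inline, and jobfile.fio names each attached bdev as a filename= target.
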
00:36:33.536  11:51:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:36:33.536  11:51:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:33.536  11:51:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  ************************************
00:36:33.536  START TEST fio_dif_rand_params
00:36:33.536  ************************************
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  bdev_null0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:33.536  [2024-12-09 11:51:25.347505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
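
For this pass the null bdev is created with protection information enabled. Reading the bdev_null_create line above against SPDK's rpc.py usage (the per-argument gloss below is an inference from that help text, not something this run prints):

    # bdev_null_create <name> <total_size> <block_size> [--md-size N] [--dif-type 0-3]
    #   bdev_null0    -> bdev name
    #   64            -> device size in MB
    #   512           -> logical block size in bytes
    #   --md-size 16  -> 16 bytes of per-block metadata, room for the 8-byte DIF tuple
    #   --dif-type 3  -> NVMe end-to-end protection type carried in that metadata
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
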
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:33.536  {
00:36:33.536  "params": {
00:36:33.536  "name": "Nvme$subsystem",
00:36:33.536  "trtype": "$TEST_TRANSPORT",
00:36:33.536  "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:33.536  "adrfam": "ipv4",
00:36:33.536  "trsvcid": "$NVMF_PORT",
00:36:33.536  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:33.536  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:33.536  "hdgst": ${hdgst:-false},
00:36:33.536  "ddgst": ${ddgst:-false}
00:36:33.536  },
00:36:33.536  "method": "bdev_nvme_attach_controller"
00:36:33.536  }
00:36:33.536  EOF
00:36:33.536  )")
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:33.536  "params": {
00:36:33.536  "name": "Nvme0",
00:36:33.536  "trtype": "tcp",
00:36:33.536  "traddr": "10.0.0.2",
00:36:33.536  "adrfam": "ipv4",
00:36:33.536  "trsvcid": "4420",
00:36:33.536  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:33.536  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:33.536  "hdgst": false,
00:36:33.536  "ddgst": false
00:36:33.536  },
00:36:33.536  "method": "bdev_nvme_attach_controller"
00:36:33.536  }'
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:33.536  11:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:33.796  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:36:33.796  ...
00:36:33.796  fio-3.35
00:36:33.796  Starting 3 threads
00:36:40.358  
00:36:40.358  filename0: (groupid=0, jobs=1): err= 0: pid=3837272: Mon Dec 9 11:51:31 2024
00:36:40.358  read: IOPS=223, BW=27.9MiB/s (29.2MB/s)(141MiB/5047msec)
00:36:40.358  slat (nsec): min=5677, max=35488, avg=8353.31, stdev=2205.83
00:36:40.358  clat (usec): min=6828, max=91906, avg=13396.92, stdev=7689.14
00:36:40.358  lat (usec): min=6837, max=91917, avg=13405.27, stdev=7689.31
00:36:40.358  clat percentiles (usec):
00:36:40.358  | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814],
00:36:40.358  | 30.00th=[11207], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911],
00:36:40.358  | 70.00th=[13304], 80.00th=[13698], 90.00th=[14484], 95.00th=[15401],
00:36:40.358  | 99.00th=[51643], 99.50th=[52167], 99.90th=[91751], 99.95th=[91751],
00:36:40.358  | 99.99th=[91751]
00:36:40.358  bw ( KiB/s): min=21504, max=32000, per=31.93%, avg=28774.40, stdev=2919.35, samples=10
00:36:40.358  iops : min= 168, max= 250, avg=224.80, stdev=22.81, samples=10
00:36:40.358  lat (msec) : 10=10.39%, 20=86.86%, 50=0.62%, 100=2.13%
00:36:40.358  cpu : usr=95.20%, sys=4.56%, ctx=6, majf=0, minf=63
00:36:40.358  IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:40.358  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.358  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.358  issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:40.358  latency : target=0, window=0, percentile=100.00%, depth=3
00:36:40.358  filename0: (groupid=0, jobs=1): err= 0: pid=3837273: Mon Dec 9 11:51:31 2024
00:36:40.358  read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(148MiB/5007msec)
00:36:40.358  slat (nsec): min=5669, max=49405, avg=8581.85, stdev=2682.93
00:36:40.358  clat (usec): min=6235, max=53091, avg=12650.62, stdev=4387.35
00:36:40.358  lat (usec): min=6247, max=53100, avg=12659.21, stdev=4387.31
00:36:40.358  clat percentiles (usec):
00:36:40.358  | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10552],
00:36:40.358  | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780],
00:36:40.358  | 70.00th=[13435], 80.00th=[14091], 90.00th=[15008], 95.00th=[15664],
00:36:40.358  | 99.00th=[50070], 99.50th=[51643], 99.90th=[52167], 99.95th=[53216],
00:36:40.358  | 99.99th=[53216]
00:36:40.358  bw ( KiB/s): min=26112, max=33536, per=33.64%, avg=30310.40, stdev=2080.45, samples=10
00:36:40.358  iops : min= 204, max= 262, avg=236.80, stdev=16.25, samples=10
00:36:40.358  lat (msec) : 10=13.41%, 20=85.58%, 50=0.08%, 100=0.93%
00:36:40.358  cpu : usr=95.53%, sys=4.23%, ctx=12, majf=0, minf=174
00:36:40.358  IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:40.358  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.358  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.358  issued rwts: total=1186,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:40.358  latency : target=0, window=0, percentile=100.00%, depth=3
00:36:40.358  filename0: (groupid=0, jobs=1): err= 0: pid=3837274: Mon Dec 9 11:51:31 2024
00:36:40.358  read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(155MiB/5045msec)
00:36:40.358  slat (nsec): min=8049, max=61233, avg=9455.31, stdev=2949.71
00:36:40.358  clat (usec): min=7285, max=54462, avg=12149.23, stdev=4098.68
00:36:40.358  lat (usec): min=7294, max=54470, avg=12158.68, stdev=4098.80
00:36:40.358  clat percentiles (usec):
00:36:40.358  | 1.00th=[ 7701], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290],
00:36:40.358  | 30.00th=[10814], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387],
00:36:40.358  | 70.00th=[12780], 80.00th=[13435], 90.00th=[14091], 95.00th=[14877],
00:36:40.358  | 99.00th=[17171], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264],
00:36:40.358  | 99.99th=[54264]
00:36:40.358  bw ( KiB/s): min=29184, max=34560, per=35.20%, avg=31718.40, stdev=1674.14, samples=10
00:36:40.358  iops : min= 228, max= 270, avg=247.80, stdev=13.08, samples=10
00:36:40.358  lat (msec) : 10=16.76%, 20=82.35%, 50=0.32%, 100=0.56%
00:36:40.358  cpu : usr=95.20%, sys=4.52%, ctx=17, majf=0, minf=62
00:36:40.358  IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:40.358  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.358  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:40.358  issued rwts: total=1241,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:40.358  latency : target=0, window=0, percentile=100.00%, depth=3
00:36:40.358  
00:36:40.358  Run status group 0 (all jobs):
00:36:40.358  READ: bw=88.0MiB/s (92.3MB/s), 27.9MiB/s-30.7MiB/s (29.2MB/s-32.2MB/s), io=444MiB (466MB), run=5007-5047msec
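
A quick cross-check of that group summary, using only arithmetic on the numbers above: the three jobs read 141 + 148 + 155 = 444 MiB, and 444 MiB divided by the longest per-job runtime (5.047 s) is about 88.0 MiB/s, which matches the aggregate READ bandwidth fio reports for run=5007-5047msec.
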
00:36:40.358  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:36:40.358  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  bdev_null0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  [2024-12-09 11:51:31.598210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  bdev_null1
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  bdev_null2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:40.359  {
00:36:40.359  "params": {
00:36:40.359  "name": "Nvme$subsystem",
00:36:40.359  "trtype": "$TEST_TRANSPORT",
00:36:40.359  "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:40.359  "adrfam": "ipv4",
00:36:40.359  "trsvcid": "$NVMF_PORT",
00:36:40.359  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:40.359  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:40.359  "hdgst": ${hdgst:-false},
00:36:40.359  "ddgst": ${ddgst:-false}
00:36:40.359  },
00:36:40.359  "method": "bdev_nvme_attach_controller"
00:36:40.359  }
00:36:40.359  EOF
00:36:40.359  )")
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:40.359  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:40.359  {
00:36:40.359  "params": {
00:36:40.359  "name": "Nvme$subsystem",
00:36:40.359  "trtype": "$TEST_TRANSPORT",
00:36:40.359  "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:40.359  "adrfam": "ipv4",
00:36:40.360  "trsvcid": "$NVMF_PORT",
00:36:40.360  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:40.360  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:40.360  "hdgst": ${hdgst:-false},
00:36:40.360  "ddgst": ${ddgst:-false}
00:36:40.360  },
00:36:40.360  "method": "bdev_nvme_attach_controller"
00:36:40.360  }
00:36:40.360  EOF
00:36:40.360  )")
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:40.360  {
00:36:40.360  "params": {
00:36:40.360  "name": "Nvme$subsystem",
00:36:40.360  "trtype": "$TEST_TRANSPORT",
00:36:40.360  "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:40.360  "adrfam": "ipv4",
00:36:40.360  "trsvcid": "$NVMF_PORT",
00:36:40.360  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:40.360  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:40.360  "hdgst": ${hdgst:-false},
00:36:40.360  "ddgst": ${ddgst:-false}
00:36:40.360  },
00:36:40.360  "method": "bdev_nvme_attach_controller"
00:36:40.360  }
00:36:40.360  EOF
00:36:40.360  )")
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:40.360  "params": {
00:36:40.360  "name": "Nvme0",
00:36:40.360  "trtype": "tcp",
00:36:40.360  "traddr": "10.0.0.2",
00:36:40.360  "adrfam": "ipv4",
00:36:40.360  "trsvcid": "4420",
00:36:40.360  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:40.360  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:40.360  "hdgst": false,
00:36:40.360  "ddgst": false
00:36:40.360  },
00:36:40.360  "method": "bdev_nvme_attach_controller"
00:36:40.360  },{
00:36:40.360  "params": {
00:36:40.360  "name": "Nvme1",
00:36:40.360  "trtype": "tcp",
00:36:40.360  "traddr": "10.0.0.2",
00:36:40.360  "adrfam": "ipv4",
00:36:40.360  "trsvcid": "4420",
00:36:40.360  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:40.360  "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:40.360  "hdgst": false,
00:36:40.360  "ddgst": false
00:36:40.360  },
00:36:40.360  "method": "bdev_nvme_attach_controller"
00:36:40.360  },{
00:36:40.360  "params": {
00:36:40.360  "name": "Nvme2",
00:36:40.360  "trtype": "tcp",
00:36:40.360  "traddr": "10.0.0.2",
00:36:40.360  "adrfam": "ipv4",
00:36:40.360  "trsvcid": "4420",
00:36:40.360  "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:36:40.360  "hostnqn": "nqn.2016-06.io.spdk:host2",
00:36:40.360  "hdgst": false,
00:36:40.360  "ddgst": false
00:36:40.360  },
00:36:40.360  "method": "bdev_nvme_attach_controller"
00:36:40.360  }'
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:40.360  11:51:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:40.360  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:40.360  ...
00:36:40.360  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:40.360  ...
00:36:40.360  filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:40.360  ...
00:36:40.360  fio-3.35
00:36:40.360  Starting 24 threads
00:36:52.554  
00:36:52.554  filename0: (groupid=0, jobs=1): err= 0: pid=3838699: Mon Dec 9 11:51:43 2024
00:36:52.554  read: IOPS=509, BW=2038KiB/s (2087kB/s)(19.9MiB/10016msec)
00:36:52.554  slat (nsec): min=5658, max=66378, avg=6965.76, stdev=2890.24
00:36:52.554  clat (usec): min=1714, max=34454, avg=31335.41, stdev=5464.46
00:36:52.554  lat (usec): min=1727, max=34461, avg=31342.37, stdev=5462.88
00:36:52.554  clat percentiles (usec):
00:36:52.554  | 1.00th=[ 1844], 5.00th=[21103], 10.00th=[32113], 20.00th=[32637],
00:36:52.554  | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.554  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.554  | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341],
00:36:52.554  | 99.99th=[34341]
00:36:52.554  bw ( KiB/s): min= 1916, max= 3200, per=4.32%, avg=2033.65, stdev=284.60, samples=20
00:36:52.554  iops : min= 479, max= 800, avg=508.30, stdev=71.13, samples=20
00:36:52.554  lat (msec) : 2=1.51%, 4=0.67%, 10=0.16%, 20=2.06%, 50=95.61%
00:36:52.554  cpu : usr=98.61%, sys=0.97%, ctx=50, majf=0, minf=61
00:36:52.554  IO depths : 1=6.0%, 2=12.1%, 4=24.2%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0%
00:36:52.554  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.554  complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.554  issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.554  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.554  filename0: (groupid=0, jobs=1): err= 0: pid=3838700: Mon Dec 9 11:51:43 2024
00:36:52.554  read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec)
00:36:52.554  slat (nsec): min=5690, max=60968, avg=15916.96, stdev=10354.03
00:36:52.554  clat (usec): min=28856, max=34380, avg=32650.69, stdev=531.99
00:36:52.554  lat (usec): min=28862, max=34402, avg=32666.61, stdev=531.30
00:36:52.554  clat percentiles (usec):
00:36:52.554  | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:36:52.554  | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.554  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.554  | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341],
00:36:52.554  | 99.99th=[34341]
00:36:52.554  bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1946.47, stdev=53.22, samples=19
00:36:52.554  iops : min= 479, max= 512, avg=486.58, stdev=13.23, samples=19
00:36:52.554  lat (msec) : 50=100.00%
00:36:52.554  cpu : usr=99.04%, sys=0.68%, ctx=29, majf=0, minf=27
00:36:52.554  IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:52.554  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.554  complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.554  issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.554  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename0: (groupid=0, jobs=1): err= 0: pid=3838701: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=490, BW=1961KiB/s (2009kB/s)(19.2MiB/10017msec)
00:36:52.555  slat (nsec): min=5686, max=75750, avg=18851.79, stdev=11833.40
00:36:52.555  clat (usec): min=11611, max=43481, avg=32459.18, stdev=1823.20
00:36:52.555  lat (usec): min=11634, max=43502, avg=32478.03, stdev=1823.80
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[21365], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:36:52.555  | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.555  | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341],
00:36:52.555  | 99.99th=[43254]
00:36:52.555  bw ( KiB/s): min= 1916, max= 2048, per=4.16%, avg=1959.58, stdev=61.74, samples=19
00:36:52.555  iops : min= 479, max= 512, avg=489.89, stdev=15.43, samples=19
00:36:52.555  lat (msec) : 20=0.65%, 50=99.35%
00:36:52.555  cpu : usr=99.04%, sys=0.70%, ctx=12, majf=0, minf=26
00:36:52.555  IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.555  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename0: (groupid=0, jobs=1): err= 0: pid=3838702: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=509, BW=2038KiB/s (2086kB/s)(19.9MiB/10020msec)
00:36:52.555  slat (nsec): min=5663, max=90815, avg=9610.51, stdev=7379.14
00:36:52.555  clat (usec): min=10633, max=34738, avg=31326.99, stdev=3937.60
00:36:52.555  lat (usec): min=10647, max=34748, avg=31336.60, stdev=3937.25
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[15139], 5.00th=[21627], 10.00th=[22938], 20.00th=[32375],
00:36:52.555  | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:52.555  | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866],
00:36:52.555  | 99.99th=[34866]
00:36:52.555  bw ( KiB/s): min= 1916, max= 2432, per=4.32%, avg=2034.15, stdev=143.92, samples=20
00:36:52.555  iops : min= 479, max= 608, avg=508.50, stdev=35.98, samples=20
00:36:52.555  lat (msec) : 20=1.88%, 50=98.12%
00:36:52.555  cpu : usr=98.46%, sys=1.03%, ctx=199, majf=0, minf=27
00:36:52.555  IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.555  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename0: (groupid=0, jobs=1): err= 0: pid=3838703: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec)
00:36:52.555  slat (nsec): min=5329, max=88378, avg=18954.23, stdev=12777.09
00:36:52.555  clat (usec): min=16993, max=61400, avg=32526.14, stdev=2567.13
00:36:52.555  lat (usec): min=17002, max=61414, avg=32545.10, stdev=2567.05
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[22676], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375],
00:36:52.555  | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.555  | 99.00th=[38011], 99.50th=[43254], 99.90th=[61604], 99.95th=[61604],
00:36:52.555  | 99.99th=[61604]
00:36:52.555  bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1951.89, stdev=69.24, samples=19
00:36:52.555  iops : min= 448, max= 512, avg=487.89, stdev=17.22, samples=19
00:36:52.555  lat (msec) : 20=0.49%, 50=99.18%, 100=0.33%
00:36:52.555  cpu : usr=98.97%, sys=0.75%, ctx=13, majf=0, minf=26
00:36:52.555  IO depths : 1=5.8%, 2=11.6%, 4=23.4%, 8=52.4%, 16=6.9%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  issued rwts: total=4894,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.555  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename0: (groupid=0, jobs=1): err= 0: pid=3838705: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec)
00:36:52.555  slat (nsec): min=5356, max=81086, avg=21327.85, stdev=14172.93
00:36:52.555  clat (usec): min=19318, max=60439, avg=32600.82, stdev=1536.05
00:36:52.555  lat (usec): min=19325, max=60453, avg=32622.15, stdev=1535.86
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[29230], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375],
00:36:52.555  | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.555  | 99.00th=[34341], 99.50th=[34866], 99.90th=[50070], 99.95th=[50070],
00:36:52.555  | 99.99th=[60556]
00:36:52.555  bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.05, stdev=68.39, samples=19
00:36:52.555  iops : min= 448, max= 512, avg=486.47, stdev=17.04, samples=19
00:36:52.555  lat (msec) : 20=0.33%, 50=99.47%, 100=0.20%
00:36:52.555  cpu : usr=97.99%, sys=1.27%, ctx=548, majf=0, minf=24
00:36:52.555  IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.555  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename0: (groupid=0, jobs=1): err= 0: pid=3838706: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10014msec)
00:36:52.555  slat (nsec): min=5686, max=79489, avg=13022.36, stdev=10034.11
00:36:52.555  clat (usec): min=19063, max=40392, avg=32617.40, stdev=1073.81
00:36:52.555  lat (usec): min=19069, max=40408, avg=32630.42, stdev=1073.51
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[29492], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:36:52.555  | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.555  | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866],
00:36:52.555  | 99.99th=[40633]
00:36:52.555  bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1952.95, stdev=57.16, samples=19
00:36:52.555  iops : min= 479, max= 512, avg=488.16, stdev=14.16, samples=19
00:36:52.555  lat (msec) : 20=0.33%, 50=99.67%
00:36:52.555  cpu : usr=98.75%, sys=0.92%, ctx=99, majf=0, minf=34
00:36:52.555  IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.555  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename0: (groupid=0, jobs=1): err= 0: pid=3838707: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10020msec)
00:36:52.555  slat (nsec): min=5730, max=72057, avg=17863.31, stdev=10594.75
00:36:52.555  clat (usec): min=9117, max=42891, avg=32409.87, stdev=2473.87
00:36:52.555  lat (usec): min=9156, max=42898, avg=32427.73, stdev=2473.41
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[14746], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:36:52.555  | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.555  | 99.00th=[34341], 99.50th=[35390], 99.90th=[42730], 99.95th=[42730],
00:36:52.555  | 99.99th=[42730]
00:36:52.555  bw ( KiB/s): min= 1916, max= 2272, per=4.17%, avg=1962.15, stdev=89.65, samples=20
00:36:52.555  iops : min= 479, max= 568, avg=490.50, stdev=22.38, samples=20
00:36:52.555  lat (msec) : 10=0.12%, 20=1.34%, 50=98.54%
00:36:52.555  cpu : usr=98.82%, sys=0.84%, ctx=43, majf=0, minf=31
00:36:52.555  IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.555  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename1: (groupid=0, jobs=1): err= 0: pid=3838708: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=490, BW=1961KiB/s (2009kB/s)(19.2MiB/10017msec)
00:36:52.555  slat (nsec): min=5651, max=76531, avg=15019.68, stdev=11167.60
00:36:52.555  clat (usec): min=11579, max=34483, avg=32505.52, stdev=1803.66
00:36:52.555  lat (usec): min=11589, max=34490, avg=32520.54, stdev=1803.34
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[26084], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:36:52.555  | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.555  | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341],
00:36:52.555  | 99.99th=[34341]
00:36:52.555  bw ( KiB/s): min= 1916, max= 2048, per=4.16%, avg=1959.58, stdev=61.74, samples=19
00:36:52.555  iops : min= 479, max= 512, avg=489.89, stdev=15.43, samples=19
00:36:52.555  lat (msec) : 20=0.65%, 50=99.35%
00:36:52.555  cpu : usr=98.82%, sys=0.88%, ctx=13, majf=0, minf=26
00:36:52.555  IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.555  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.555  filename1: (groupid=0, jobs=1): err= 0: pid=3838709: Mon Dec 9 11:51:43 2024
00:36:52.555  read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec)
00:36:52.555  slat (nsec): min=5653, max=80788, avg=22182.46, stdev=13659.07
00:36:52.555  clat (usec): min=21210, max=51053, avg=32595.92, stdev=1195.22
00:36:52.555  lat (usec): min=21219, max=51069, avg=32618.10, stdev=1194.79
00:36:52.555  clat percentiles (usec):
00:36:52.555  | 1.00th=[30802], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375],
00:36:52.555  | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637],
00:36:52.555  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.555  | 99.00th=[34341], 99.50th=[34866], 99.90th=[43779], 99.95th=[43779],
00:36:52.555  | 99.99th=[51119]
00:36:52.555  bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1946.42, stdev=53.26, samples=19
00:36:52.555  iops : min= 479, max= 512, avg=486.53, stdev=13.26, samples=19
00:36:52.555  lat (msec) : 50=99.96%, 100=0.04%
00:36:52.555  cpu : usr=98.72%, sys=0.92%, ctx=101, majf=0, minf=27
00:36:52.555  IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:52.555  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.555  complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.556  issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.556  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.556  filename1: (groupid=0, jobs=1): err= 0: pid=3838710: Mon Dec 9 11:51:43 2024
00:36:52.556  read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10002msec)
00:36:52.556  slat (nsec): min=5576, max=76394, avg=16320.13, stdev=11751.29
00:36:52.556  clat (usec): min=10000, max=39837, avg=32231.56, stdev=2788.37
00:36:52.556  lat (usec): min=10011, max=39845, avg=32247.88, stdev=2787.56
00:36:52.556  clat percentiles (usec):
00:36:52.556  | 1.00th=[15008], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375],
00:36:52.556  | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.556  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.556  | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[39584],
00:36:52.556  | 99.99th=[39584]
00:36:52.556  bw ( KiB/s): min= 1916, max= 2304, per=4.20%, avg=1979.47, stdev=98.91, samples=19
00:36:52.556  iops : min= 479, max= 576, avg=494.79, stdev=24.67, samples=19
00:36:52.556  lat (msec) : 20=1.90%, 50=98.10%
00:36:52.556  cpu : usr=99.01%, sys=0.69%, ctx=18, majf=0, minf=48
00:36:52.556  IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:52.556  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.556  complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.556  issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.556  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.556  filename1: (groupid=0, jobs=1): err= 0: pid=3838711: Mon Dec 9 11:51:43 2024
00:36:52.556  read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10004msec)
00:36:52.556  slat (nsec): min=5153, max=83300, avg=15192.39, stdev=12484.31
00:36:52.556  clat (usec): min=14385, max=74060, avg=32674.22, stdev=3427.47
00:36:52.556  lat (usec): min=14391, max=74076, avg=32689.41, stdev=3427.97
00:36:52.556  clat percentiles (usec):
00:36:52.556  | 1.00th=[19268], 5.00th=[27919], 10.00th=[32375], 20.00th=[32375],
00:36:52.556  | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.556  | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341],
00:36:52.556  | 99.00th=[44303], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119],
00:36:52.556  | 99.99th=[73925]
00:36:52.556  bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1949.47, stdev=63.81, samples=19
00:36:52.556  iops : min= 448, max= 512, avg=487.37, stdev=15.95, samples=19
00:36:52.556  lat (msec) : 20=1.02%, 50=98.36%, 100=0.61%
00:36:52.556  cpu : usr=99.06%, sys=0.66%, ctx=11, majf=0, minf=35
00:36:52.556  IO depths : 1=0.6%, 2=1.3%, 4=3.3%, 8=77.9%, 16=16.8%, 32=0.0%, >=64=0.0%
00:36:52.556  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.556  complete : 0=0.0%, 4=89.8%, 8=9.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.556  issued rwts: total=4888,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.556  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.556  filename1: (groupid=0, jobs=1): err= 0: pid=3838712: Mon Dec 9 11:51:43 2024
00:36:52.556  read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10020msec)
00:36:52.556  slat (nsec): min=5692, max=82671, avg=21962.44, stdev=15315.52
00:36:52.556  clat (usec): min=9541, max=40501, avg=32099.49, stdev=2857.27
00:36:52.556  lat (usec): min=9555, max=40511, avg=32121.45, stdev=2858.34
00:36:52.556  clat percentiles (usec):
00:36:52.556  | 1.00th=[15008], 5.00th=[31327], 10.00th=[32113], 20.00th=[32375],
00:36:52.556  | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637],
00:36:52.556  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.556  | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[40633],
00:36:52.556  | 99.99th=[40633]
00:36:52.556  bw ( KiB/s): min= 1916, max= 2480, per=4.20%, avg=1979.00, stdev=130.98, samples=20
00:36:52.556  iops : min= 479, max= 620, avg=494.75, stdev=32.75, samples=20
00:36:52.556  lat (msec) : 10=0.04%, 20=1.81%, 50=98.15%
00:36:52.556  cpu : usr=98.56%, sys=0.94%, ctx=33, majf=0, minf=29
00:36:52.556  IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0%
00:36:52.556  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.556  complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:52.556  issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:52.556  latency : target=0, window=0, percentile=100.00%, depth=16
00:36:52.556  filename1: (groupid=0, jobs=1): err= 0: pid=3838713: Mon Dec 9 11:51:43 2024
00:36:52.556  read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10009msec)
00:36:52.556  slat (nsec): min=4927, max=66472, avg=14697.43, stdev=10509.39
00:36:52.556  clat (usec): min=19217, max=48881, avg=32681.28, stdev=1387.27
00:36:52.556  lat (usec): min=19223, max=48895, avg=32695.98, stdev=1386.86
00:36:52.556  clat percentiles (usec):
00:36:52.556  | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:36:52.556  | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:36:52.556  | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:52.556  | 99.00th=[33817], 99.50th=[34341], 99.90th=[49021], 99.95th=[49021],
00:36:52.556  | 99.99th=[49021]
00:36:52.556  bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.53, stdev=68.70, samples=19
00:36:52.556  iops : min= 448, max= 512, avg=486.63, stdev=17.18, samples=19
00:36:52.556  lat (msec) : 20=0.29%, 50=99.71%
00:36:52.556  cpu : usr=99.03%, sys=0.67%, ctx=31, majf=0, minf=39
00:36:52.556  IO depths : 1=6.0%,
2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:52.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.556 filename1: (groupid=0, jobs=1): err= 0: pid=3838715: Mon Dec 9 11:51:43 2024 00:36:52.556 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10012msec) 00:36:52.556 slat (nsec): min=5671, max=84902, avg=17168.52, stdev=13604.14 00:36:52.556 clat (usec): min=10707, max=35187, avg=32360.21, stdev=2496.04 00:36:52.556 lat (usec): min=10731, max=35195, avg=32377.38, stdev=2495.18 00:36:52.556 clat percentiles (usec): 00:36:52.556 | 1.00th=[14877], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:52.556 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:52.556 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.556 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:36:52.556 | 99.99th=[35390] 00:36:52.556 bw ( KiB/s): min= 1916, max= 2304, per=4.17%, avg=1966.05, stdev=97.61, samples=19 00:36:52.556 iops : min= 479, max= 576, avg=491.47, stdev=24.37, samples=19 00:36:52.556 lat (msec) : 20=1.62%, 50=98.38% 00:36:52.556 cpu : usr=98.55%, sys=0.99%, ctx=71, majf=0, minf=36 00:36:52.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:52.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.556 filename1: (groupid=0, jobs=1): err= 0: pid=3838716: Mon Dec 9 11:51:43 2024 00:36:52.556 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10005msec) 00:36:52.556 slat (nsec): min=5697, max=80819, avg=20166.51, stdev=13848.44 00:36:52.556 clat (usec): min=17161, max=50867, avg=32620.34, stdev=1525.67 00:36:52.556 lat (usec): min=17167, max=50888, avg=32640.50, stdev=1525.70 00:36:52.556 clat percentiles (usec): 00:36:52.556 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:52.556 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:52.556 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.556 | 99.00th=[34866], 99.50th=[34866], 99.90th=[50594], 99.95th=[50594], 00:36:52.556 | 99.99th=[51119] 00:36:52.556 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.00, stdev=67.90, samples=19 00:36:52.556 iops : min= 448, max= 512, avg=486.42, stdev=16.86, samples=19 00:36:52.556 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:52.556 cpu : usr=98.92%, sys=0.78%, ctx=12, majf=0, minf=31 00:36:52.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:52.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.556 filename2: (groupid=0, jobs=1): err= 0: pid=3838717: Mon Dec 9 11:51:43 2024 00:36:52.556 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10004msec) 00:36:52.556 slat (nsec): min=5690, max=61532, 
avg=18038.51, stdev=11534.41 00:36:52.556 clat (usec): min=16855, max=75230, avg=32743.80, stdev=2772.53 00:36:52.556 lat (usec): min=16864, max=75249, avg=32761.84, stdev=2772.21 00:36:52.556 clat percentiles (usec): 00:36:52.556 | 1.00th=[30802], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:52.556 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:52.556 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.556 | 99.00th=[34866], 99.50th=[44303], 99.90th=[74974], 99.95th=[74974], 00:36:52.556 | 99.99th=[74974] 00:36:52.556 bw ( KiB/s): min= 1667, max= 2048, per=4.12%, avg=1939.42, stdev=96.91, samples=19 00:36:52.556 iops : min= 416, max= 512, avg=484.74, stdev=24.26, samples=19 00:36:52.556 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:52.556 cpu : usr=98.62%, sys=0.91%, ctx=53, majf=0, minf=31 00:36:52.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:52.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.556 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.556 filename2: (groupid=0, jobs=1): err= 0: pid=3838718: Mon Dec 9 11:51:43 2024 00:36:52.556 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10006msec) 00:36:52.556 slat (nsec): min=5650, max=91612, avg=14034.05, stdev=11900.74 00:36:52.556 clat (usec): min=11982, max=59495, avg=32443.17, stdev=4232.38 00:36:52.556 lat (usec): min=11987, max=59505, avg=32457.21, stdev=4231.31 00:36:52.556 clat percentiles (usec): 00:36:52.556 | 1.00th=[25035], 5.00th=[26084], 10.00th=[27132], 20.00th=[29230], 00:36:52.556 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:52.557 | 70.00th=[32900], 80.00th=[33162], 90.00th=[35914], 95.00th=[40109], 00:36:52.557 | 99.00th=[45876], 99.50th=[50594], 99.90th=[59507], 99.95th=[59507], 00:36:52.557 | 99.99th=[59507] 00:36:52.557 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1962.89, stdev=58.75, samples=19 00:36:52.557 iops : min= 448, max= 512, avg=490.68, stdev=14.63, samples=19 00:36:52.557 lat (msec) : 20=0.41%, 50=98.90%, 100=0.69% 00:36:52.557 cpu : usr=98.84%, sys=0.76%, ctx=64, majf=0, minf=42 00:36:52.557 IO depths : 1=0.1%, 2=0.1%, 4=1.8%, 8=81.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:36:52.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 issued rwts: total=4926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.557 filename2: (groupid=0, jobs=1): err= 0: pid=3838719: Mon Dec 9 11:51:43 2024 00:36:52.557 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10005msec) 00:36:52.557 slat (nsec): min=5146, max=85026, avg=21132.13, stdev=13386.33 00:36:52.557 clat (usec): min=16890, max=60701, avg=32703.68, stdev=1870.38 00:36:52.557 lat (usec): min=16899, max=60715, avg=32724.81, stdev=1869.51 00:36:52.557 clat percentiles (usec): 00:36:52.557 | 1.00th=[30802], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:52.557 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:52.557 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.557 | 99.00th=[41157], 99.50th=[43254], 99.90th=[50070], 99.95th=[50070], 00:36:52.557 | 99.99th=[60556] 
00:36:52.557 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1939.26, stdev=76.43, samples=19 00:36:52.557 iops : min= 448, max= 512, avg=484.74, stdev=19.00, samples=19 00:36:52.557 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:52.557 cpu : usr=98.48%, sys=1.05%, ctx=94, majf=0, minf=27 00:36:52.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:52.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.557 filename2: (groupid=0, jobs=1): err= 0: pid=3838720: Mon Dec 9 11:51:43 2024 00:36:52.557 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10004msec) 00:36:52.557 slat (nsec): min=5425, max=85997, avg=21207.74, stdev=13449.76 00:36:52.557 clat (usec): min=16235, max=50386, avg=32566.80, stdev=1830.95 00:36:52.557 lat (usec): min=16242, max=50401, avg=32588.01, stdev=1830.50 00:36:52.557 clat percentiles (usec): 00:36:52.557 | 1.00th=[23462], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:52.557 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:52.557 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.557 | 99.00th=[35390], 99.50th=[42206], 99.90th=[50594], 99.95th=[50594], 00:36:52.557 | 99.99th=[50594] 00:36:52.557 bw ( KiB/s): min= 1792, max= 2096, per=4.14%, avg=1948.53, stdev=72.63, samples=19 00:36:52.557 iops : min= 448, max= 524, avg=487.05, stdev=18.05, samples=19 00:36:52.557 lat (msec) : 20=0.45%, 50=99.22%, 100=0.33% 00:36:52.557 cpu : usr=98.46%, sys=1.11%, ctx=49, majf=0, minf=30 00:36:52.557 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:52.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.557 filename2: (groupid=0, jobs=1): err= 0: pid=3838721: Mon Dec 9 11:51:43 2024 00:36:52.557 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10013msec) 00:36:52.557 slat (nsec): min=5757, max=76032, avg=16811.82, stdev=11217.63 00:36:52.557 clat (usec): min=20604, max=42691, avg=32576.27, stdev=1089.93 00:36:52.557 lat (usec): min=20612, max=42700, avg=32593.08, stdev=1090.15 00:36:52.557 clat percentiles (usec): 00:36:52.557 | 1.00th=[28967], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:52.557 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:52.557 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.557 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:36:52.557 | 99.99th=[42730] 00:36:52.557 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1953.26, stdev=58.18, samples=19 00:36:52.557 iops : min= 479, max= 512, avg=488.32, stdev=14.55, samples=19 00:36:52.557 lat (msec) : 50=100.00% 00:36:52.557 cpu : usr=98.41%, sys=1.11%, ctx=148, majf=0, minf=35 00:36:52.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:52.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 issued rwts: total=4896,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:36:52.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.557 filename2: (groupid=0, jobs=1): err= 0: pid=3838722: Mon Dec 9 11:51:43 2024 00:36:52.557 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10004msec) 00:36:52.557 slat (nsec): min=5639, max=60870, avg=18549.28, stdev=10831.24 00:36:52.557 clat (usec): min=11844, max=75111, avg=32734.08, stdev=2815.22 00:36:52.557 lat (usec): min=11849, max=75128, avg=32752.63, stdev=2815.25 00:36:52.557 clat percentiles (usec): 00:36:52.557 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:52.557 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:52.557 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.557 | 99.00th=[34341], 99.50th=[43779], 99.90th=[74974], 99.95th=[74974], 00:36:52.557 | 99.99th=[74974] 00:36:52.557 bw ( KiB/s): min= 1667, max= 2048, per=4.12%, avg=1939.47, stdev=87.41, samples=19 00:36:52.557 iops : min= 416, max= 512, avg=484.79, stdev=21.93, samples=19 00:36:52.557 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:52.557 cpu : usr=98.66%, sys=0.88%, ctx=63, majf=0, minf=30 00:36:52.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:52.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.557 filename2: (groupid=0, jobs=1): err= 0: pid=3838723: Mon Dec 9 11:51:43 2024 00:36:52.557 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10004msec) 00:36:52.557 slat (nsec): min=5651, max=79011, avg=18848.00, stdev=13500.27 00:36:52.557 clat (usec): min=16209, max=60463, avg=32407.84, stdev=3317.40 00:36:52.557 lat (usec): min=16216, max=60480, avg=32426.69, stdev=3317.22 00:36:52.557 clat percentiles (usec): 00:36:52.557 | 1.00th=[21365], 5.00th=[26870], 10.00th=[31589], 20.00th=[32375], 00:36:52.557 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:52.557 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[36439], 00:36:52.557 | 99.00th=[43779], 99.50th=[49546], 99.90th=[60556], 99.95th=[60556], 00:36:52.557 | 99.99th=[60556] 00:36:52.557 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1960.53, stdev=66.61, samples=19 00:36:52.557 iops : min= 448, max= 512, avg=490.05, stdev=16.70, samples=19 00:36:52.557 lat (msec) : 20=0.65%, 50=98.86%, 100=0.49% 00:36:52.557 cpu : usr=98.84%, sys=0.86%, ctx=14, majf=0, minf=26 00:36:52.557 IO depths : 1=4.2%, 2=8.6%, 4=18.3%, 8=59.5%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:52.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.557 filename2: (groupid=0, jobs=1): err= 0: pid=3838724: Mon Dec 9 11:51:43 2024 00:36:52.557 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10008msec) 00:36:52.557 slat (nsec): min=5420, max=52688, avg=10719.94, stdev=7037.96 00:36:52.557 clat (usec): min=20390, max=51184, avg=32607.39, stdev=1573.92 00:36:52.557 lat (usec): min=20397, max=51191, avg=32618.11, stdev=1573.73 00:36:52.557 clat percentiles (usec): 00:36:52.557 | 1.00th=[23725], 5.00th=[32113], 
10.00th=[32375], 20.00th=[32375], 00:36:52.557 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:52.557 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:52.557 | 99.00th=[33817], 99.50th=[34341], 99.90th=[51119], 99.95th=[51119], 00:36:52.557 | 99.99th=[51119] 00:36:52.557 bw ( KiB/s): min= 1840, max= 2048, per=4.14%, avg=1952.95, stdev=63.07, samples=19 00:36:52.557 iops : min= 460, max= 512, avg=488.16, stdev=15.65, samples=19 00:36:52.557 lat (msec) : 50=99.88%, 100=0.12% 00:36:52.557 cpu : usr=98.85%, sys=0.74%, ctx=45, majf=0, minf=19 00:36:52.557 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:52.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.557 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:52.557 00:36:52.557 Run status group 0 (all jobs): 00:36:52.557 READ: bw=46.0MiB/s (48.2MB/s), 1945KiB/s-2038KiB/s (1991kB/s-2087kB/s), io=461MiB (483MB), run=10001-10020msec 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.557 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 bdev_null0 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 [2024-12-09 11:51:43.416804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 bdev_null1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:52.558 { 00:36:52.558 "params": { 00:36:52.558 "name": "Nvme$subsystem", 00:36:52.558 "trtype": "$TEST_TRANSPORT", 00:36:52.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:52.558 "adrfam": "ipv4", 00:36:52.558 "trsvcid": "$NVMF_PORT", 00:36:52.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:52.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:52.558 "hdgst": ${hdgst:-false}, 00:36:52.558 "ddgst": ${ddgst:-false} 00:36:52.558 }, 00:36:52.558 "method": "bdev_nvme_attach_controller" 00:36:52.558 } 00:36:52.558 EOF 00:36:52.558 )") 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:52.558 { 00:36:52.558 "params": { 00:36:52.558 "name": "Nvme$subsystem", 00:36:52.558 "trtype": "$TEST_TRANSPORT", 00:36:52.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:52.558 "adrfam": "ipv4", 00:36:52.558 "trsvcid": "$NVMF_PORT", 00:36:52.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:52.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:52.558 "hdgst": ${hdgst:-false}, 00:36:52.558 "ddgst": ${ddgst:-false} 00:36:52.558 }, 00:36:52.558 "method": "bdev_nvme_attach_controller" 00:36:52.558 } 00:36:52.558 EOF 00:36:52.558 )") 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:36:52.558 11:51:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:52.559 "params": { 00:36:52.559 "name": "Nvme0", 00:36:52.559 "trtype": "tcp", 00:36:52.559 "traddr": "10.0.0.2", 00:36:52.559 "adrfam": "ipv4", 00:36:52.559 "trsvcid": "4420", 00:36:52.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.559 "hdgst": false, 00:36:52.559 "ddgst": false 00:36:52.559 }, 00:36:52.559 "method": "bdev_nvme_attach_controller" 00:36:52.559 },{ 00:36:52.559 "params": { 00:36:52.559 "name": "Nvme1", 00:36:52.559 "trtype": "tcp", 00:36:52.559 "traddr": "10.0.0.2", 00:36:52.559 "adrfam": "ipv4", 00:36:52.559 "trsvcid": "4420", 00:36:52.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:52.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:52.559 "hdgst": false, 00:36:52.559 "ddgst": false 00:36:52.559 }, 00:36:52.559 "method": "bdev_nvme_attach_controller" 00:36:52.559 }' 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:52.559 11:51:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.559 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:52.559 ... 00:36:52.559 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:52.559 ... 
00:36:52.559 fio-3.35 00:36:52.559 Starting 4 threads 00:36:57.824 00:36:57.824 filename0: (groupid=0, jobs=1): err= 0: pid=3840999: Mon Dec 9 11:51:49 2024 00:36:57.824 read: IOPS=2099, BW=16.4MiB/s (17.2MB/s)(82.1MiB/5003msec) 00:36:57.824 slat (nsec): min=5503, max=61557, avg=7525.03, stdev=3017.57 00:36:57.824 clat (usec): min=809, max=6399, avg=3791.41, stdev=615.69 00:36:57.824 lat (usec): min=825, max=6408, avg=3798.94, stdev=615.50 00:36:57.824 clat percentiles (usec): 00:36:57.824 | 1.00th=[ 2737], 5.00th=[ 3130], 10.00th=[ 3294], 20.00th=[ 3425], 00:36:57.824 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:36:57.824 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4948], 95.00th=[ 5276], 00:36:57.824 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[ 6325], 00:36:57.824 | 99.99th=[ 6390] 00:36:57.824 bw ( KiB/s): min=16512, max=17376, per=25.34%, avg=16796.80, stdev=273.60, samples=10 00:36:57.824 iops : min= 2064, max= 2172, avg=2099.60, stdev=34.20, samples=10 00:36:57.824 lat (usec) : 1000=0.01% 00:36:57.824 lat (msec) : 2=0.08%, 4=83.30%, 10=16.61% 00:36:57.824 cpu : usr=96.42%, sys=3.30%, ctx=6, majf=0, minf=9 00:36:57.824 IO depths : 1=0.1%, 2=0.1%, 4=69.7%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.824 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.825 issued rwts: total=10503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.825 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:57.825 filename0: (groupid=0, jobs=1): err= 0: pid=3841000: Mon Dec 9 11:51:49 2024 00:36:57.825 read: IOPS=2039, BW=15.9MiB/s (16.7MB/s)(79.7MiB/5002msec) 00:36:57.825 slat (nsec): min=5490, max=35195, avg=7564.90, stdev=2689.82 00:36:57.825 clat (usec): min=1360, max=49958, avg=3901.89, stdev=1449.50 00:36:57.825 lat (usec): min=1366, max=49981, avg=3909.45, stdev=1449.58 00:36:57.825 clat percentiles (usec): 00:36:57.825 | 1.00th=[ 2933], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3458], 00:36:57.825 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752], 00:36:57.825 | 70.00th=[ 3818], 80.00th=[ 4015], 90.00th=[ 5276], 95.00th=[ 5473], 00:36:57.825 | 99.00th=[ 5866], 99.50th=[ 5932], 99.90th=[ 6390], 99.95th=[50070], 00:36:57.825 | 99.99th=[50070] 00:36:57.825 bw ( KiB/s): min=15312, max=16896, per=24.57%, avg=16286.22, stdev=468.08, samples=9 00:36:57.825 iops : min= 1914, max= 2112, avg=2035.78, stdev=58.51, samples=9 00:36:57.825 lat (msec) : 2=0.03%, 4=79.85%, 10=20.05%, 50=0.08% 00:36:57.825 cpu : usr=96.58%, sys=3.14%, ctx=8, majf=0, minf=9 00:36:57.825 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.825 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.825 issued rwts: total=10201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.825 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:57.825 filename1: (groupid=0, jobs=1): err= 0: pid=3841001: Mon Dec 9 11:51:49 2024 00:36:57.825 read: IOPS=2096, BW=16.4MiB/s (17.2MB/s)(81.9MiB/5002msec) 00:36:57.825 slat (nsec): min=5485, max=58587, avg=7533.52, stdev=3144.35 00:36:57.825 clat (usec): min=1926, max=6389, avg=3796.48, stdev=630.23 00:36:57.825 lat (usec): min=1932, max=6395, avg=3804.02, stdev=630.20 00:36:57.825 clat percentiles (usec): 00:36:57.825 | 1.00th=[ 2671], 5.00th=[ 3097], 10.00th=[ 3294], 
20.00th=[ 3458], 00:36:57.825 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3720], 00:36:57.825 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 5211], 95.00th=[ 5276], 00:36:57.825 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6063], 99.95th=[ 6128], 00:36:57.825 | 99.99th=[ 6390] 00:36:57.825 bw ( KiB/s): min=16480, max=17232, per=25.31%, avg=16774.40, stdev=244.19, samples=10 00:36:57.825 iops : min= 2060, max= 2154, avg=2096.80, stdev=30.52, samples=10 00:36:57.825 lat (msec) : 2=0.03%, 4=82.93%, 10=17.04% 00:36:57.825 cpu : usr=96.34%, sys=3.40%, ctx=6, majf=0, minf=9 00:36:57.825 IO depths : 1=0.1%, 2=0.1%, 4=70.6%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.825 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.825 issued rwts: total=10487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.825 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:57.825 filename1: (groupid=0, jobs=1): err= 0: pid=3841002: Mon Dec 9 11:51:49 2024 00:36:57.825 read: IOPS=2051, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5001msec) 00:36:57.825 slat (nsec): min=5493, max=48007, avg=7400.78, stdev=2715.27 00:36:57.825 clat (usec): min=1132, max=6960, avg=3880.59, stdev=673.88 00:36:57.825 lat (usec): min=1138, max=6984, avg=3887.99, stdev=673.72 00:36:57.825 clat percentiles (usec): 00:36:57.825 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3490], 00:36:57.825 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3785], 00:36:57.825 | 70.00th=[ 3818], 80.00th=[ 4047], 90.00th=[ 5276], 95.00th=[ 5473], 00:36:57.825 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6652], 99.95th=[ 6718], 00:36:57.825 | 99.99th=[ 6915] 00:36:57.825 bw ( KiB/s): min=15952, max=16656, per=24.75%, avg=16403.56, stdev=243.68, samples=9 00:36:57.825 iops : min= 1994, max= 2082, avg=2050.44, stdev=30.46, samples=9 00:36:57.825 lat (msec) : 2=0.03%, 4=79.64%, 10=20.34% 00:36:57.825 cpu : usr=96.78%, sys=2.96%, ctx=6, majf=0, minf=9 00:36:57.825 IO depths : 1=0.1%, 2=0.1%, 4=72.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.825 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.825 issued rwts: total=10258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.825 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:57.825 00:36:57.825 Run status group 0 (all jobs): 00:36:57.825 READ: bw=64.7MiB/s (67.9MB/s), 15.9MiB/s-16.4MiB/s (16.7MB/s-17.2MB/s), io=324MiB (340MB), run=5001-5003msec 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
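A quick consistency check on the run that just completed: io=324MiB over the ~5.0s window agrees with the reported 64.7MiB/s aggregate, which is also the sum of the four per-thread rates (16.4+15.9+16.4+16.0 MiB/s). The teardown and setup steps traced around this point are plain SPDK RPCs; below is a minimal sketch of the same cycle driven by hand with scripts/rpc.py, assuming an SPDK checkout with nvmf_tgt already running on the default RPC socket (bdev geometry, NQN, serial number, and the 10.0.0.2:4420 TCP listener are copied verbatim from the trace):

#!/usr/bin/env bash
# Sketch only: recreate and destroy subsystem 0 the way target/dif.sh does here.
RPC=./scripts/rpc.py    # run from the SPDK tree; nvmf_tgt must already be up

# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
     --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420

# ...and the teardown mirrored by destroy_subsystem above:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0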
00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.825 00:36:57.825 real 0m24.552s 00:36:57.825 user 5m16.056s 00:36:57.825 sys 0m4.667s 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 ************************************ 00:36:57.825 END TEST fio_dif_rand_params 00:36:57.825 ************************************ 00:36:57.825 11:51:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:57.825 11:51:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:57.825 11:51:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 ************************************ 00:36:57.825 START TEST fio_dif_digest 00:36:57.825 ************************************ 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:57.825 11:51:49 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 bdev_null0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.825 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:58.084 [2024-12-09 11:51:49.986384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:58.084 { 00:36:58.084 "params": { 00:36:58.084 "name": "Nvme$subsystem", 00:36:58.084 "trtype": "$TEST_TRANSPORT", 00:36:58.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.084 "adrfam": "ipv4", 00:36:58.084 "trsvcid": "$NVMF_PORT", 00:36:58.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:36:58.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.084 "hdgst": ${hdgst:-false}, 00:36:58.084 "ddgst": ${ddgst:-false} 00:36:58.084 }, 00:36:58.084 "method": "bdev_nvme_attach_controller" 00:36:58.084 } 00:36:58.084 EOF 00:36:58.084 )") 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.084 11:51:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:58.084 "params": { 00:36:58.084 "name": "Nvme0", 00:36:58.084 "trtype": "tcp", 00:36:58.084 "traddr": "10.0.0.2", 00:36:58.084 "adrfam": "ipv4", 00:36:58.084 "trsvcid": "4420", 00:36:58.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.084 "hdgst": true, 00:36:58.084 "ddgst": true 00:36:58.084 }, 00:36:58.084 "method": "bdev_nvme_attach_controller" 00:36:58.084 }' 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:58.084 11:51:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.342 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:58.342 ... 
00:36:58.342 fio-3.35 00:36:58.342 Starting 3 threads 00:37:10.562 00:37:10.562 filename0: (groupid=0, jobs=1): err= 0: pid=3842465: Mon Dec 9 11:52:01 2024 00:37:10.562 read: IOPS=220, BW=27.5MiB/s (28.9MB/s)(277MiB/10044msec) 00:37:10.562 slat (nsec): min=5869, max=38171, avg=7280.14, stdev=1456.76 00:37:10.562 clat (usec): min=7829, max=54789, avg=13593.85, stdev=2790.81 00:37:10.562 lat (usec): min=7836, max=54795, avg=13601.13, stdev=2790.87 00:37:10.562 clat percentiles (usec): 00:37:10.562 | 1.00th=[ 8979], 5.00th=[10290], 10.00th=[11600], 20.00th=[12649], 00:37:10.562 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:37:10.562 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:37:10.562 | 99.00th=[16450], 99.50th=[17433], 99.90th=[53740], 99.95th=[54264], 00:37:10.562 | 99.99th=[54789] 00:37:10.562 bw ( KiB/s): min=25856, max=30720, per=34.97%, avg=28288.00, stdev=1090.87, samples=20 00:37:10.562 iops : min= 202, max= 240, avg=221.00, stdev= 8.52, samples=20 00:37:10.562 lat (msec) : 10=4.02%, 20=95.61%, 50=0.05%, 100=0.32% 00:37:10.562 cpu : usr=93.44%, sys=6.31%, ctx=22, majf=0, minf=146 00:37:10.562 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.562 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.562 filename0: (groupid=0, jobs=1): err= 0: pid=3842466: Mon Dec 9 11:52:01 2024 00:37:10.562 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(271MiB/10048msec) 00:37:10.562 slat (nsec): min=5904, max=42928, avg=6973.60, stdev=1575.06 00:37:10.562 clat (usec): min=8417, max=57221, avg=13900.93, stdev=2844.74 00:37:10.562 lat (usec): min=8424, max=57230, avg=13907.91, stdev=2844.85 00:37:10.562 clat percentiles (usec): 00:37:10.562 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[12125], 20.00th=[12911], 00:37:10.562 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:37:10.562 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:37:10.562 | 99.00th=[16909], 99.50th=[18482], 99.90th=[56361], 99.95th=[56886], 00:37:10.562 | 99.99th=[57410] 00:37:10.562 bw ( KiB/s): min=24064, max=30464, per=34.21%, avg=27673.60, stdev=1450.30, samples=20 00:37:10.562 iops : min= 188, max= 238, avg=216.20, stdev=11.33, samples=20 00:37:10.562 lat (msec) : 10=3.00%, 20=96.63%, 50=0.05%, 100=0.32% 00:37:10.562 cpu : usr=94.05%, sys=5.70%, ctx=24, majf=0, minf=159 00:37:10.562 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.562 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.562 filename0: (groupid=0, jobs=1): err= 0: pid=3842467: Mon Dec 9 11:52:01 2024 00:37:10.562 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(247MiB/10047msec) 00:37:10.562 slat (nsec): min=5743, max=32905, avg=7385.49, stdev=1658.28 00:37:10.562 clat (usec): min=8496, max=95922, avg=15238.19, stdev=5975.15 00:37:10.562 lat (usec): min=8503, max=95928, avg=15245.57, stdev=5975.09 00:37:10.562 clat percentiles (usec): 00:37:10.562 | 1.00th=[10421], 5.00th=[12387], 10.00th=[13042], 
20.00th=[13566], 00:37:10.562 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:37:10.562 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16188], 95.00th=[16712], 00:37:10.562 | 99.00th=[54789], 99.50th=[55837], 99.90th=[95945], 99.95th=[95945], 00:37:10.562 | 99.99th=[95945] 00:37:10.562 bw ( KiB/s): min=19968, max=27392, per=31.20%, avg=25241.60, stdev=2177.62, samples=20 00:37:10.562 iops : min= 156, max= 214, avg=197.20, stdev=17.01, samples=20 00:37:10.563 lat (msec) : 10=0.41%, 20=97.72%, 50=0.15%, 100=1.72% 00:37:10.563 cpu : usr=94.18%, sys=5.56%, ctx=28, majf=0, minf=188 00:37:10.563 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.563 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.563 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.563 00:37:10.563 Run status group 0 (all jobs): 00:37:10.563 READ: bw=79.0MiB/s (82.8MB/s), 24.6MiB/s-27.5MiB/s (25.8MB/s-28.9MB/s), io=794MiB (832MB), run=10044-10048msec 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.563 00:37:10.563 real 0m11.280s 00:37:10.563 user 0m44.267s 00:37:10.563 sys 0m2.074s 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.563 11:52:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:10.563 ************************************ 00:37:10.563 END TEST fio_dif_digest 00:37:10.563 ************************************ 00:37:10.563 11:52:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:10.563 11:52:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:10.563 rmmod nvme_tcp 00:37:10.563 rmmod nvme_fabrics 00:37:10.563 rmmod nvme_keyring 00:37:10.563 
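Two sanity checks on the digest run above: the group READ line is just the three jobs summed (27.5 + 26.9 + 24.6 = 79.0 MiB/s, ~794 MiB over ~10 s), and the 50-95 ms completion-latency maxima appear only in the 99.90th-percentile buckets, so the steady-state digest path sits around 13-15 ms at iodepth=3. The teardown being traced here then runs in a fixed order: delete the subsystem, delete the backing null bdev, and let nvmftestfini unload the kernel initiator modules and kill the target. A rough by-hand equivalent, assuming the default RPC socket and using the RPC names from the trace:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_null_delete bdev_null0
  sync
  modprobe -v -r nvme-tcp nvme-fabrics    # emits the rmmod nvme_tcp / nvme_fabrics lines seen above
  kill "$nvmfpid" && wait "$nvmfpid"      # nvmf_tgt pid, 3831546 in this run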
11:52:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3831546 ']' 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3831546 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3831546 ']' 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3831546 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3831546 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3831546' 00:37:10.563 killing process with pid 3831546 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3831546 00:37:10.563 11:52:01 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3831546 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:10.563 11:52:01 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:13.102 Waiting for block devices as requested 00:37:13.102 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:13.102 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:13.102 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:13.363 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:13.363 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:13.363 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:13.363 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:13.622 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:13.622 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:13.882 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:13.882 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:13.882 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:14.142 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:14.142 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:14.142 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:14.142 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:14.402 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.662 11:52:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.662 11:52:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:14.662 11:52:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.572 11:52:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:37:16.572 00:37:16.572 real 1m18.055s 00:37:16.572 user 7m58.984s 00:37:16.572 sys 0m22.183s 00:37:16.572 11:52:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.572 11:52:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:16.572 ************************************ 00:37:16.572 END TEST nvmf_dif 00:37:16.572 ************************************ 00:37:16.833 11:52:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:16.833 11:52:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:16.833 11:52:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.833 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:37:16.833 ************************************ 00:37:16.833 START TEST nvmf_abort_qd_sizes 00:37:16.833 ************************************ 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:16.833 * Looking for test storage... 00:37:16.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.833 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:17.094 11:52:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:17.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.094 --rc genhtml_branch_coverage=1 00:37:17.094 --rc genhtml_function_coverage=1 00:37:17.094 --rc genhtml_legend=1 00:37:17.094 --rc geninfo_all_blocks=1 00:37:17.094 --rc geninfo_unexecuted_blocks=1 00:37:17.094 00:37:17.094 ' 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:17.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.094 --rc genhtml_branch_coverage=1 00:37:17.094 --rc genhtml_function_coverage=1 00:37:17.094 --rc genhtml_legend=1 00:37:17.094 --rc geninfo_all_blocks=1 00:37:17.094 --rc geninfo_unexecuted_blocks=1 00:37:17.094 00:37:17.094 ' 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:17.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.094 --rc genhtml_branch_coverage=1 00:37:17.094 --rc genhtml_function_coverage=1 00:37:17.094 --rc genhtml_legend=1 00:37:17.094 --rc geninfo_all_blocks=1 00:37:17.094 --rc geninfo_unexecuted_blocks=1 00:37:17.094 00:37:17.094 ' 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:17.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.094 --rc genhtml_branch_coverage=1 00:37:17.094 --rc genhtml_function_coverage=1 00:37:17.094 --rc genhtml_legend=1 00:37:17.094 --rc geninfo_all_blocks=1 00:37:17.094 --rc geninfo_unexecuted_blocks=1 00:37:17.094 00:37:17.094 ' 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.094 11:52:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:17.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:17.095 11:52:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:25.236 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:25.236 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:25.236 Found net devices under 0000:31:00.0: cvl_0_0 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:25.236 Found net devices under 0000:31:00.1: cvl_0_1 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:25.236 11:52:16 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:25.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:25.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:37:25.236 00:37:25.236 --- 10.0.0.2 ping statistics --- 00:37:25.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:25.236 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:25.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:25.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:37:25.236 00:37:25.236 --- 10.0.0.1 ping statistics --- 00:37:25.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:25.236 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:25.236 11:52:16 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:27.781 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:27.781 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3852059 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3852059 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3852059 ']' 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
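The nvmf_tcp_init sequence traced above is what builds the two-namespace TCP path on the e810 pair: cvl_0_0 moves into a private network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, one iptables rule opens port 4420, and both directions are ping-verified before the target app starts. Condensed from the trace (xtrace noise dropped, commands otherwise as executed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target: 0.666 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator: 0.274 ms above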
00:37:28.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:28.353 11:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:28.353 [2024-12-09 11:52:20.311812] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:37:28.353 [2024-12-09 11:52:20.311860] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.353 [2024-12-09 11:52:20.390036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:28.353 [2024-12-09 11:52:20.427142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:28.353 [2024-12-09 11:52:20.427173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:28.353 [2024-12-09 11:52:20.427180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:28.353 [2024-12-09 11:52:20.427187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:28.353 [2024-12-09 11:52:20.427193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:28.353 [2024-12-09 11:52:20.428707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.353 [2024-12-09 11:52:20.428819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:28.353 [2024-12-09 11:52:20.428972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.353 [2024-12-09 11:52:20.428973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:29.294 
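nvmfappstart, whose trace surrounds this point, boots the userspace target inside that namespace and then blocks until its RPC socket answers. Condensed (binary path and flags from the trace; the waitforlisten polling loop is simplified here to a single retry loop, an approximation of what common/autotest_common.sh actually does):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!    # 3852059 in this run
  # poll the default RPC socket until the app is up
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done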
11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.294 11:52:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:29.294 ************************************ 00:37:29.294 START TEST spdk_target_abort 00:37:29.294 ************************************ 00:37:29.294 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:29.294 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:29.294 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:29.294 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.294 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:29.555 spdk_targetn1 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:29.555 [2024-12-09 11:52:21.524126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:29.555 [2024-12-09 11:52:21.576411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:29.555 11:52:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:29.817 [2024-12-09 11:52:21.848507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:272 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.848534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.849069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:312 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.849080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.864481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:816 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.864498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.871442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1024 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.871457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.887481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1616 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.887496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00cb p:1 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.895521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1896 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.895536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.920431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2800 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.920446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.950346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3776 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.950365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d9 p:0 m:0 dnr:0 00:37:29.817 [2024-12-09 11:52:21.958771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:4072 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:29.817 [2024-12-09 11:52:21.958785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00fe p:0 m:0 dnr:0 00:37:33.116 Initializing NVMe Controllers 00:37:33.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:33.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:33.116 Initialization complete. Launching workers. 
00:37:33.116 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13514, failed: 9 00:37:33.116 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3304, failed to submit 10219 00:37:33.116 success 805, unsuccessful 2499, failed 0 00:37:33.116 11:52:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:33.116 11:52:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:33.116 [2024-12-09 11:52:25.073253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:640 len:8 PRP1 0x200004e58000 PRP2 0x0 00:37:33.116 [2024-12-09 11:52:25.073294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:37:33.116 [2024-12-09 11:52:25.119921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1728 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:33.116 [2024-12-09 11:52:25.119946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00da p:1 m:0 dnr:0 00:37:33.116 [2024-12-09 11:52:25.127285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1816 len:8 PRP1 0x200004e54000 PRP2 0x0 00:37:33.116 [2024-12-09 11:52:25.127306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00e8 p:1 m:0 dnr:0 00:37:33.116 [2024-12-09 11:52:25.191114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:3192 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:37:33.116 [2024-12-09 11:52:25.191137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0098 p:0 m:0 dnr:0 00:37:33.116 [2024-12-09 11:52:25.223210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:3960 len:8 PRP1 0x200004e48000 PRP2 0x0 00:37:33.116 [2024-12-09 11:52:25.223233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00fb p:0 m:0 dnr:0 00:37:33.377 [2024-12-09 11:52:25.492067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:10080 len:8 PRP1 0x200004e46000 PRP2 0x0 00:37:33.377 [2024-12-09 11:52:25.492096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00f0 p:1 m:0 dnr:0 00:37:35.917 [2024-12-09 11:52:27.715257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:60800 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:35.917 [2024-12-09 11:52:27.715300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:37:36.177 Initializing NVMe Controllers 00:37:36.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:36.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:36.177 Initialization complete. Launching workers. 
00:37:36.177 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8553, failed: 7 00:37:36.177 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1167, failed to submit 7393 00:37:36.177 success 344, unsuccessful 823, failed 0 00:37:36.177 11:52:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:36.177 11:52:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.437 [2024-12-09 11:52:28.370722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:160 nsid:1 lba:3392 len:8 PRP1 0x200004b14000 PRP2 0x0 00:37:36.437 [2024-12-09 11:52:28.370746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:160 cdw0:0 sqhd:008b p:1 m:0 dnr:0 00:37:37.819 [2024-12-09 11:52:29.895977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:163 nsid:1 lba:172752 len:8 PRP1 0x200004af0000 PRP2 0x0 00:37:37.819 [2024-12-09 11:52:29.895999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:163 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:39.732 Initializing NVMe Controllers 00:37:39.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:39.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:39.732 Initialization complete. Launching workers. 00:37:39.732 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41750, failed: 2 00:37:39.732 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2689, failed to submit 39063 00:37:39.732 success 609, unsuccessful 2080, failed 0 00:37:39.732 11:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:39.732 11:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.732 11:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:39.732 11:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.732 11:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:39.732 11:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.732 11:52:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:41.115 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.115 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3852059 00:37:41.115 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3852059 ']' 00:37:41.115 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3852059 00:37:41.115 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:41.115 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:41.115 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852059 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852059' 00:37:41.375 killing process with pid 3852059 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3852059 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3852059 00:37:41.375 00:37:41.375 real 0m12.214s 00:37:41.375 user 0m49.936s 00:37:41.375 sys 0m1.831s 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:41.375 ************************************ 00:37:41.375 END TEST spdk_target_abort 00:37:41.375 ************************************ 00:37:41.375 11:52:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:41.375 11:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:41.375 11:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:41.375 11:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:41.375 ************************************ 00:37:41.375 START TEST kernel_target_abort 00:37:41.375 ************************************ 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:41.375 11:52:33 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:41.375 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:41.635 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:41.635 11:52:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:44.932 Waiting for block devices as requested 00:37:44.932 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:44.932 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:44.932 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:44.932 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:44.932 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:44.932 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:44.932 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:45.192 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:45.192 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:45.452 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:45.452 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:45.452 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:45.452 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:45.712 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:45.712 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:45.712 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:45.971 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:46.232 No valid GPT data, bailing 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:46.232 11:52:38 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:37:46.232 00:37:46.232 Discovery Log Number of Records 2, Generation counter 2 00:37:46.232 =====Discovery Log Entry 0====== 00:37:46.232 trtype: tcp 00:37:46.232 adrfam: ipv4 00:37:46.232 subtype: current discovery subsystem 00:37:46.232 treq: not specified, sq flow control disable supported 00:37:46.232 portid: 1 00:37:46.232 trsvcid: 4420 00:37:46.232 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:46.232 traddr: 10.0.0.1 00:37:46.232 eflags: none 00:37:46.232 sectype: none 00:37:46.232 =====Discovery Log Entry 1====== 00:37:46.232 trtype: tcp 00:37:46.232 adrfam: ipv4 00:37:46.232 subtype: nvme subsystem 00:37:46.232 treq: not specified, sq flow control disable supported 00:37:46.232 portid: 1 00:37:46.232 trsvcid: 4420 00:37:46.232 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:46.232 traddr: 10.0.0.1 00:37:46.232 eflags: none 00:37:46.232 sectype: none 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:46.232 
11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:46.232 11:52:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:49.529 Initializing NVMe Controllers 00:37:49.529 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:49.529 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:49.529 Initialization complete. Launching workers. 
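The configure_kernel_target trace above is what stands up the Linux-kernel NVMe/TCP target these abort runs are aimed at: pick an unused /dev/nvme* block device, build the subsystem/namespace/port tree under configfs, then link them. A minimal sketch of that sequence, with the paths and values taken from this log (run as root with the nvmet and nvmet_tcp modules loaded; attribute names follow the kernel nvmet configfs interface):

#!/usr/bin/env bash
# Sketch of the configfs writes traced above; values come from this log.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"  # subsystem dir must exist first

echo "SPDK-$nqn" > "$subsys/attr_model"         # the "echo SPDK-nqn..." step above
echo 1 > "$subsys/attr_allow_any_host"          # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"             # listen address/transport/port/family
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/$nqn"         # linking starts the listener

The nvme discover output above, with one discovery entry and one entry for nqn.2016-06.io.spdk:testnqn, is the confirmation that this wiring took.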
00:37:49.529 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67064, failed: 0 00:37:49.529 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67064, failed to submit 0 00:37:49.529 success 0, unsuccessful 67064, failed 0 00:37:49.529 11:52:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:49.529 11:52:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:52.825 Initializing NVMe Controllers 00:37:52.825 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:52.825 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:52.825 Initialization complete. Launching workers. 00:37:52.825 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108279, failed: 0 00:37:52.825 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27266, failed to submit 81013 00:37:52.825 success 0, unsuccessful 27266, failed 0 00:37:52.825 11:52:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:52.825 11:52:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:56.125 Initializing NVMe Controllers 00:37:56.125 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:56.125 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:56.125 Initialization complete. Launching workers. 
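Each run above and below is the same abort example with only -q swept across the qds array (4, 24, 64) built earlier in the trace. A hedged sketch of the invocation shape; SPDK_DIR is a placeholder for the Jenkins workspace path used in the log:

# Illustrative only; flag meanings per the SPDK abort example.
args=(
    -q 24      # queue depth, the value this test sweeps
    -w rw      # mixed read/write workload
    -M 50      # 50 percent of I/Os are reads
    -o 4096    # 4 KiB I/O size
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
)
"$SPDK_DIR/build/examples/abort" "${args[@]}"

Note that the kernel target reports success 0 for every abort it receives; the test is exercising abort submission at different queue depths, not requiring aborts to beat their I/Os to completion.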
00:37:56.125 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101957, failed: 0 00:37:56.125 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25502, failed to submit 76455 00:37:56.125 success 0, unsuccessful 25502, failed 0 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:56.125 11:52:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:59.423 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:59.423 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:01.334 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:01.334 00:38:01.334 real 0m19.879s 00:38:01.334 user 0m9.773s 00:38:01.334 sys 0m5.797s 00:38:01.334 11:52:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:01.334 11:52:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.334 ************************************ 00:38:01.334 END TEST kernel_target_abort 00:38:01.334 ************************************ 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:01.334 rmmod nvme_tcp 00:38:01.334 rmmod nvme_fabrics 00:38:01.334 rmmod nvme_keyring 00:38:01.334 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3852059 ']' 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3852059 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3852059 ']' 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3852059 00:38:01.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3852059) - No such process 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3852059 is not found' 00:38:01.595 Process with pid 3852059 is not found 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:01.595 11:52:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:04.895 Waiting for block devices as requested 00:38:04.895 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:04.895 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:04.896 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:04.896 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:04.896 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:05.155 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:05.155 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:05.155 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:05.415 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:05.415 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:05.675 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:05.675 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:05.675 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:05.675 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:05.934 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:05.934 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:05.934 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
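The clean_kernel_target trace above tears the target down in reverse: disable the namespace, unlink the port, remove the configfs directories, then unload the modules. Condensed sketch with the same paths as the setup sketch earlier:

echo 0 > "$subsys/namespaces/1/enable"   # quiesce the namespace first
rm -f "$port/subsystems/$nqn"            # stop listening for the subsystem
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet              # safe once configfs is empty

The iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline traced here is the firewall counterpart: re-apply the saved ruleset minus anything tagged SPDK_NVMF, which removes the rules the test inserted without touching the rest.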
00:38:06.195 11:52:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:08.737 11:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:08.737 00:38:08.737 real 0m51.630s 00:38:08.737 user 1m4.855s 00:38:08.737 sys 0m18.581s 00:38:08.737 11:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.737 11:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:08.737 ************************************ 00:38:08.737 END TEST nvmf_abort_qd_sizes 00:38:08.737 ************************************ 00:38:08.737 11:53:00 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:08.737 11:53:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:08.737 11:53:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.737 11:53:00 -- common/autotest_common.sh@10 -- # set +x 00:38:08.737 ************************************ 00:38:08.737 START TEST keyring_file 00:38:08.737 ************************************ 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:08.737 * Looking for test storage... 00:38:08.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:08.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.737 --rc genhtml_branch_coverage=1 00:38:08.737 --rc genhtml_function_coverage=1 00:38:08.737 --rc genhtml_legend=1 00:38:08.737 --rc geninfo_all_blocks=1 00:38:08.737 --rc geninfo_unexecuted_blocks=1 00:38:08.737 00:38:08.737 ' 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:08.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.737 --rc genhtml_branch_coverage=1 00:38:08.737 --rc genhtml_function_coverage=1 00:38:08.737 --rc genhtml_legend=1 00:38:08.737 --rc geninfo_all_blocks=1 00:38:08.737 --rc geninfo_unexecuted_blocks=1 00:38:08.737 00:38:08.737 ' 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:08.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.737 --rc genhtml_branch_coverage=1 00:38:08.737 --rc genhtml_function_coverage=1 00:38:08.737 --rc genhtml_legend=1 00:38:08.737 --rc geninfo_all_blocks=1 00:38:08.737 --rc geninfo_unexecuted_blocks=1 00:38:08.737 00:38:08.737 ' 00:38:08.737 11:53:00 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:08.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.737 --rc genhtml_branch_coverage=1 00:38:08.737 --rc genhtml_function_coverage=1 00:38:08.737 --rc genhtml_legend=1 00:38:08.737 --rc geninfo_all_blocks=1 00:38:08.737 --rc geninfo_unexecuted_blocks=1 00:38:08.737 00:38:08.737 ' 00:38:08.737 11:53:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:08.737 11:53:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:08.737 
11:53:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:08.737 11:53:00 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:08.737 11:53:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.737 11:53:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.737 11:53:00 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.737 11:53:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:08.737 11:53:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@51 -- # : 0 
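The host identity for the keyring tests comes from nvme gen-hostnqn, as traced above: the NQN is UUID based and the UUID doubles as the host ID. One way to derive the pair; the exact extraction in common.sh is not visible in the trace, so the parameter expansion here is illustrative:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep the text after the last ':', i.e. the UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")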
00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:08.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:08.737 11:53:00 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:08.737 11:53:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:08.737 11:53:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:08.737 11:53:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:08.737 11:53:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:08.737 11:53:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:08.737 11:53:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:08.737 11:53:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:08.737 11:53:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:08.737 11:53:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:08.737 11:53:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.azWV190qib 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.azWV190qib 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.azWV190qib 00:38:08.738 11:53:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.azWV190qib 00:38:08.738 11:53:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.x9NejaSMrK 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:08.738 11:53:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.x9NejaSMrK 00:38:08.738 11:53:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.x9NejaSMrK 00:38:08.738 11:53:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.x9NejaSMrK 00:38:08.738 11:53:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=3862275 00:38:08.738 11:53:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3862275 00:38:08.738 11:53:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:08.738 11:53:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3862275 ']' 00:38:08.738 11:53:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.738 11:53:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.738 11:53:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:08.738 11:53:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.738 11:53:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:08.997 [2024-12-09 11:53:00.932575] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
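The two "python -" steps above convert the raw hex keys into the NVMe/TCP TLS PSK interchange format that the key files hold on disk. A sketch of that transform: the hash field 00 corresponds to the digest 0 argument (no PSK digest), and the base64 payload is assumed to be the key bytes followed by a little-endian CRC-32, per the interchange format as understood here rather than a verbatim copy of SPDK's format_key helper:

hexkey=00112233445566778899aabbccddeeff   # key0 from the trace
python3 - "$hexkey" <<'PY'
import base64, struct, sys, zlib
raw = bytes.fromhex(sys.argv[1])              # 16-byte configured PSK
crc = struct.pack('<I', zlib.crc32(raw))      # CRC-32 of the key, little-endian
print('NVMeTLSkey-1:00:%s:' % base64.b64encode(raw + crc).decode())
PY

Whatever the exact payload, the result lands in a mktemp file chmod'ed to 0600, which matters later: the test deliberately loosens one file to 0660 to prove keyring_file refuses group-accessible keys.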
00:38:08.997 [2024-12-09 11:53:00.932652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862275 ] 00:38:08.997 [2024-12-09 11:53:01.010130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.997 [2024-12-09 11:53:01.053216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.566 11:53:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.566 11:53:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:09.566 11:53:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:09.566 11:53:01 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.566 11:53:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:09.827 [2024-12-09 11:53:01.729872] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.827 null0 00:38:09.827 [2024-12-09 11:53:01.761922] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:09.827 [2024-12-09 11:53:01.762268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.827 11:53:01 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:09.827 [2024-12-09 11:53:01.793991] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:09.827 request: 00:38:09.827 { 00:38:09.827 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.827 "secure_channel": false, 00:38:09.827 "listen_address": { 00:38:09.827 "trtype": "tcp", 00:38:09.827 "traddr": "127.0.0.1", 00:38:09.827 "trsvcid": "4420" 00:38:09.827 }, 00:38:09.827 "method": "nvmf_subsystem_add_listener", 00:38:09.827 "req_id": 1 00:38:09.827 } 00:38:09.827 Got JSON-RPC error response 00:38:09.827 response: 00:38:09.827 { 00:38:09.827 "code": -32602, 00:38:09.827 "message": "Invalid parameters" 00:38:09.827 } 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:09.827 11:53:01 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:09.827 11:53:01 keyring_file -- keyring/file.sh@47 -- # bperfpid=3862358 00:38:09.827 11:53:01 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3862358 /var/tmp/bperf.sock 00:38:09.827 11:53:01 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3862358 ']' 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:09.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.827 11:53:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:09.827 [2024-12-09 11:53:01.853234] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:38:09.828 [2024-12-09 11:53:01.853282] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862358 ] 00:38:09.828 [2024-12-09 11:53:01.929392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.828 [2024-12-09 11:53:01.970601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.088 11:53:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.088 11:53:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:10.088 11:53:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:10.088 11:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:10.088 11:53:02 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.x9NejaSMrK 00:38:10.088 11:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.x9NejaSMrK 00:38:10.349 11:53:02 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:10.349 11:53:02 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:10.349 11:53:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.349 11:53:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:10.349 11:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.609 11:53:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.azWV190qib == \/\t\m\p\/\t\m\p\.\a\z\W\V\1\9\0\q\i\b ]] 00:38:10.609 11:53:02 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:10.609 11:53:02 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:10.609 11:53:02 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.x9NejaSMrK == \/\t\m\p\/\t\m\p\.\x\9\N\e\j\a\S\M\r\K ]] 00:38:10.609 11:53:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.609 11:53:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:10.869 11:53:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:10.869 11:53:02 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:10.869 11:53:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:10.869 11:53:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:10.869 11:53:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.869 11:53:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:10.869 11:53:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.128 11:53:03 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:11.129 11:53:03 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:11.129 11:53:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:11.129 [2024-12-09 11:53:03.219859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:11.388 nvme0n1 00:38:11.388 11:53:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.388 11:53:03 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:11.388 11:53:03 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.388 11:53:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.388 11:53:03 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:11.648 11:53:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:11.648 11:53:03 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:11.648 Running I/O for 1 seconds... 00:38:12.849 17030.00 IOPS, 66.52 MiB/s 00:38:12.849 Latency(us) 00:38:12.849 [2024-12-09T10:53:05.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.849 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:12.849 nvme0n1 : 1.01 17033.07 66.54 0.00 0.00 7486.15 6062.08 17148.59 00:38:12.849 [2024-12-09T10:53:05.011Z] =================================================================================================================== 00:38:12.849 [2024-12-09T10:53:05.012Z] Total : 17033.07 66.54 0.00 0.00 7486.15 6062.08 17148.59 00:38:12.850 { 00:38:12.850 "results": [ 00:38:12.850 { 00:38:12.850 "job": "nvme0n1", 00:38:12.850 "core_mask": "0x2", 00:38:12.850 "workload": "randrw", 00:38:12.850 "percentage": 50, 00:38:12.850 "status": "finished", 00:38:12.850 "queue_depth": 128, 00:38:12.850 "io_size": 4096, 00:38:12.850 "runtime": 1.007393, 00:38:12.850 "iops": 17033.07448036665, 00:38:12.850 "mibps": 66.53544718893222, 00:38:12.850 "io_failed": 0, 00:38:12.850 "io_timeout": 0, 00:38:12.850 "avg_latency_us": 7486.149163315656, 00:38:12.850 "min_latency_us": 6062.08, 00:38:12.850 "max_latency_us": 17148.586666666666 00:38:12.850 } 00:38:12.850 ], 00:38:12.850 "core_count": 1 00:38:12.850 } 00:38:12.850 11:53:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:12.850 11:53:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:12.850 11:53:04 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:12.850 11:53:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:12.850 11:53:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:12.850 11:53:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:12.850 11:53:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:12.850 11:53:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.112 11:53:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:13.112 11:53:05 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:13.112 11:53:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:13.112 11:53:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.112 11:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.112 11:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:13.112 11:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.372 11:53:05 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:13.372 11:53:05 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:13.372 11:53:05 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:38:13.372 11:53:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:13.372 11:53:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:13.372 11:53:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:13.372 11:53:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:13.372 11:53:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:13.372 11:53:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:13.372 11:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:13.372 [2024-12-09 11:53:05.464527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:13.372 [2024-12-09 11:53:05.465265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48630 (107): Transport endpoint is not connected 00:38:13.372 [2024-12-09 11:53:05.466260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48630 (9): Bad file descriptor 00:38:13.372 [2024-12-09 11:53:05.467262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:13.372 [2024-12-09 11:53:05.467269] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:13.372 [2024-12-09 11:53:05.467274] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:13.372 [2024-12-09 11:53:05.467280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
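The attach with key1 above is supposed to fail: the same controller attached fine with key0 earlier, while key1 does not match what the target expects, so the TLS handshake dies with the transport errors just logged, and the NOT wrapper turns that failure into a test pass by inverting the exit status. A simplified sketch of the idiom (the real helper in autotest_common.sh also validates the command and screens signal deaths, which is what the es > 128 checks in the trace are doing):

NOT() {
    local es=0
    "$@" || es=$?                 # run the command, capture its exit status
    ((es > 128)) && return "$es"  # killed by a signal: a real failure, not a pass
    ((es == 0)) && return 1       # unexpected success fails the test
    return 0                      # expected failure counts as success
}

The request/response dump that follows is the JSON-RPC view of the same failed bdev_nvme_attach_controller call.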
00:38:13.372 request: 00:38:13.372 { 00:38:13.372 "name": "nvme0", 00:38:13.372 "trtype": "tcp", 00:38:13.372 "traddr": "127.0.0.1", 00:38:13.372 "adrfam": "ipv4", 00:38:13.373 "trsvcid": "4420", 00:38:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:13.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:13.373 "prchk_reftag": false, 00:38:13.373 "prchk_guard": false, 00:38:13.373 "hdgst": false, 00:38:13.373 "ddgst": false, 00:38:13.373 "psk": "key1", 00:38:13.373 "allow_unrecognized_csi": false, 00:38:13.373 "method": "bdev_nvme_attach_controller", 00:38:13.373 "req_id": 1 00:38:13.373 } 00:38:13.373 Got JSON-RPC error response 00:38:13.373 response: 00:38:13.373 { 00:38:13.373 "code": -5, 00:38:13.373 "message": "Input/output error" 00:38:13.373 } 00:38:13.373 11:53:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:13.373 11:53:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:13.373 11:53:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:13.373 11:53:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:13.373 11:53:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:13.373 11:53:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:13.373 11:53:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.373 11:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.373 11:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.373 11:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:13.633 11:53:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:13.633 11:53:05 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:13.633 11:53:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:13.633 11:53:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.633 11:53:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.633 11:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.633 11:53:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:13.892 11:53:05 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:13.892 11:53:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:13.892 11:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:13.892 11:53:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:13.892 11:53:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:14.152 11:53:06 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:14.152 11:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:14.152 11:53:06 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:14.412 11:53:06 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:14.412 11:53:06 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.azWV190qib 00:38:14.412 11:53:06 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:14.412 11:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:14.412 [2024-12-09 11:53:06.475891] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.azWV190qib': 0100660 00:38:14.412 [2024-12-09 11:53:06.475911] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:14.412 request: 00:38:14.412 { 00:38:14.412 "name": "key0", 00:38:14.412 "path": "/tmp/tmp.azWV190qib", 00:38:14.412 "method": "keyring_file_add_key", 00:38:14.412 "req_id": 1 00:38:14.412 } 00:38:14.412 Got JSON-RPC error response 00:38:14.412 response: 00:38:14.412 { 00:38:14.412 "code": -1, 00:38:14.412 "message": "Operation not permitted" 00:38:14.412 } 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:14.412 11:53:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:14.412 11:53:06 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.azWV190qib 00:38:14.412 11:53:06 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:14.412 11:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.azWV190qib 00:38:14.671 11:53:06 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.azWV190qib 00:38:14.671 11:53:06 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:14.671 11:53:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:14.671 11:53:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:14.671 11:53:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:14.671 11:53:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:14.671 11:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:14.932 11:53:06 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:14.932 11:53:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:14.932 11:53:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:14.932 11:53:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:14.932 11:53:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:14.932 11:53:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:14.932 11:53:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:14.932 11:53:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:14.932 11:53:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:14.932 11:53:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:14.932 [2024-12-09 11:53:06.993208] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.azWV190qib': No such file or directory 00:38:14.932 [2024-12-09 11:53:06.993224] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:14.932 [2024-12-09 11:53:06.993237] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:14.932 [2024-12-09 11:53:06.993243] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:14.932 [2024-12-09 11:53:06.993249] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:14.932 [2024-12-09 11:53:06.993254] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:14.932 request: 00:38:14.932 { 00:38:14.932 "name": "nvme0", 00:38:14.932 "trtype": "tcp", 00:38:14.932 "traddr": "127.0.0.1", 00:38:14.932 "adrfam": "ipv4", 00:38:14.932 "trsvcid": "4420", 00:38:14.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:14.932 "prchk_reftag": false, 00:38:14.932 "prchk_guard": false, 00:38:14.932 "hdgst": false, 00:38:14.932 "ddgst": false, 00:38:14.932 "psk": "key0", 00:38:14.932 "allow_unrecognized_csi": false, 00:38:14.932 "method": "bdev_nvme_attach_controller", 00:38:14.932 "req_id": 1 00:38:14.932 } 00:38:14.932 Got JSON-RPC error response 00:38:14.932 response: 00:38:14.932 { 00:38:14.932 "code": -19, 00:38:14.932 "message": "No such device" 00:38:14.932 } 00:38:14.932 11:53:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:14.932 11:53:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:14.932 11:53:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:14.932 11:53:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:14.932 11:53:07 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:14.932 11:53:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:15.192 11:53:07 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mOU8MnlpZc 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:15.192 11:53:07 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:15.192 11:53:07 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:15.192 11:53:07 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:15.192 11:53:07 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:15.192 11:53:07 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:15.192 11:53:07 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mOU8MnlpZc 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mOU8MnlpZc 00:38:15.192 11:53:07 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.mOU8MnlpZc 00:38:15.192 11:53:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mOU8MnlpZc 00:38:15.192 11:53:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mOU8MnlpZc 00:38:15.452 11:53:07 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:15.452 11:53:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:15.452 nvme0n1 00:38:15.452 11:53:07 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:15.452 11:53:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:15.452 11:53:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:15.452 11:53:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:15.452 11:53:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:15.452 11:53:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:15.712 11:53:07 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:15.712 11:53:07 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:15.712 11:53:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:15.973 11:53:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:15.973 11:53:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:15.973 11:53:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:15.973 11:53:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:15.973 11:53:07 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:15.973 11:53:08 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:15.973 11:53:08 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:15.973 11:53:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:15.973 11:53:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:15.973 11:53:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:15.973 11:53:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:15.973 11:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:16.233 11:53:08 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:16.233 11:53:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:16.233 11:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:16.494 11:53:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:16.494 11:53:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:16.494 11:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:16.494 11:53:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:16.494 11:53:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mOU8MnlpZc 00:38:16.494 11:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mOU8MnlpZc 00:38:16.754 11:53:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.x9NejaSMrK 00:38:16.754 11:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.x9NejaSMrK 00:38:17.014 11:53:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:17.014 11:53:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:17.014 nvme0n1 00:38:17.276 11:53:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:17.276 11:53:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:17.276 11:53:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:17.276 "subsystems": [ 00:38:17.276 { 00:38:17.276 "subsystem": "keyring", 00:38:17.276 "config": [ 00:38:17.276 { 00:38:17.276 "method": "keyring_file_add_key", 00:38:17.276 "params": { 00:38:17.276 "name": "key0", 00:38:17.276 "path": "/tmp/tmp.mOU8MnlpZc" 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "keyring_file_add_key", 00:38:17.276 "params": { 00:38:17.276 "name": "key1", 00:38:17.276 "path": "/tmp/tmp.x9NejaSMrK" 00:38:17.276 } 00:38:17.276 } 00:38:17.276 ] 00:38:17.276 
}, 00:38:17.276 { 00:38:17.276 "subsystem": "iobuf", 00:38:17.276 "config": [ 00:38:17.276 { 00:38:17.276 "method": "iobuf_set_options", 00:38:17.276 "params": { 00:38:17.276 "small_pool_count": 8192, 00:38:17.276 "large_pool_count": 1024, 00:38:17.276 "small_bufsize": 8192, 00:38:17.276 "large_bufsize": 135168, 00:38:17.276 "enable_numa": false 00:38:17.276 } 00:38:17.276 } 00:38:17.276 ] 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "subsystem": "sock", 00:38:17.276 "config": [ 00:38:17.276 { 00:38:17.276 "method": "sock_set_default_impl", 00:38:17.276 "params": { 00:38:17.276 "impl_name": "posix" 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "sock_impl_set_options", 00:38:17.276 "params": { 00:38:17.276 "impl_name": "ssl", 00:38:17.276 "recv_buf_size": 4096, 00:38:17.276 "send_buf_size": 4096, 00:38:17.276 "enable_recv_pipe": true, 00:38:17.276 "enable_quickack": false, 00:38:17.276 "enable_placement_id": 0, 00:38:17.276 "enable_zerocopy_send_server": true, 00:38:17.276 "enable_zerocopy_send_client": false, 00:38:17.276 "zerocopy_threshold": 0, 00:38:17.276 "tls_version": 0, 00:38:17.276 "enable_ktls": false 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "sock_impl_set_options", 00:38:17.276 "params": { 00:38:17.276 "impl_name": "posix", 00:38:17.276 "recv_buf_size": 2097152, 00:38:17.276 "send_buf_size": 2097152, 00:38:17.276 "enable_recv_pipe": true, 00:38:17.276 "enable_quickack": false, 00:38:17.276 "enable_placement_id": 0, 00:38:17.276 "enable_zerocopy_send_server": true, 00:38:17.276 "enable_zerocopy_send_client": false, 00:38:17.276 "zerocopy_threshold": 0, 00:38:17.276 "tls_version": 0, 00:38:17.276 "enable_ktls": false 00:38:17.276 } 00:38:17.276 } 00:38:17.276 ] 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "subsystem": "vmd", 00:38:17.276 "config": [] 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "subsystem": "accel", 00:38:17.276 "config": [ 00:38:17.276 { 00:38:17.276 "method": "accel_set_options", 00:38:17.276 "params": { 00:38:17.276 "small_cache_size": 128, 00:38:17.276 "large_cache_size": 16, 00:38:17.276 "task_count": 2048, 00:38:17.276 "sequence_count": 2048, 00:38:17.276 "buf_count": 2048 00:38:17.276 } 00:38:17.276 } 00:38:17.276 ] 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "subsystem": "bdev", 00:38:17.276 "config": [ 00:38:17.276 { 00:38:17.276 "method": "bdev_set_options", 00:38:17.276 "params": { 00:38:17.276 "bdev_io_pool_size": 65535, 00:38:17.276 "bdev_io_cache_size": 256, 00:38:17.276 "bdev_auto_examine": true, 00:38:17.276 "iobuf_small_cache_size": 128, 00:38:17.276 "iobuf_large_cache_size": 16 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "bdev_raid_set_options", 00:38:17.276 "params": { 00:38:17.276 "process_window_size_kb": 1024, 00:38:17.276 "process_max_bandwidth_mb_sec": 0 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "bdev_iscsi_set_options", 00:38:17.276 "params": { 00:38:17.276 "timeout_sec": 30 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "bdev_nvme_set_options", 00:38:17.276 "params": { 00:38:17.276 "action_on_timeout": "none", 00:38:17.276 "timeout_us": 0, 00:38:17.276 "timeout_admin_us": 0, 00:38:17.276 "keep_alive_timeout_ms": 10000, 00:38:17.276 "arbitration_burst": 0, 00:38:17.276 "low_priority_weight": 0, 00:38:17.276 "medium_priority_weight": 0, 00:38:17.276 "high_priority_weight": 0, 00:38:17.276 "nvme_adminq_poll_period_us": 10000, 00:38:17.276 "nvme_ioq_poll_period_us": 0, 00:38:17.276 "io_queue_requests": 512, 00:38:17.276 
"delay_cmd_submit": true, 00:38:17.276 "transport_retry_count": 4, 00:38:17.276 "bdev_retry_count": 3, 00:38:17.276 "transport_ack_timeout": 0, 00:38:17.276 "ctrlr_loss_timeout_sec": 0, 00:38:17.276 "reconnect_delay_sec": 0, 00:38:17.276 "fast_io_fail_timeout_sec": 0, 00:38:17.276 "disable_auto_failback": false, 00:38:17.276 "generate_uuids": false, 00:38:17.276 "transport_tos": 0, 00:38:17.276 "nvme_error_stat": false, 00:38:17.276 "rdma_srq_size": 0, 00:38:17.276 "io_path_stat": false, 00:38:17.276 "allow_accel_sequence": false, 00:38:17.276 "rdma_max_cq_size": 0, 00:38:17.276 "rdma_cm_event_timeout_ms": 0, 00:38:17.276 "dhchap_digests": [ 00:38:17.276 "sha256", 00:38:17.276 "sha384", 00:38:17.276 "sha512" 00:38:17.276 ], 00:38:17.276 "dhchap_dhgroups": [ 00:38:17.276 "null", 00:38:17.276 "ffdhe2048", 00:38:17.276 "ffdhe3072", 00:38:17.276 "ffdhe4096", 00:38:17.276 "ffdhe6144", 00:38:17.276 "ffdhe8192" 00:38:17.276 ] 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "bdev_nvme_attach_controller", 00:38:17.276 "params": { 00:38:17.276 "name": "nvme0", 00:38:17.276 "trtype": "TCP", 00:38:17.276 "adrfam": "IPv4", 00:38:17.276 "traddr": "127.0.0.1", 00:38:17.276 "trsvcid": "4420", 00:38:17.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.276 "prchk_reftag": false, 00:38:17.276 "prchk_guard": false, 00:38:17.276 "ctrlr_loss_timeout_sec": 0, 00:38:17.276 "reconnect_delay_sec": 0, 00:38:17.276 "fast_io_fail_timeout_sec": 0, 00:38:17.276 "psk": "key0", 00:38:17.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:17.276 "hdgst": false, 00:38:17.276 "ddgst": false, 00:38:17.276 "multipath": "multipath" 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "bdev_nvme_set_hotplug", 00:38:17.276 "params": { 00:38:17.276 "period_us": 100000, 00:38:17.276 "enable": false 00:38:17.276 } 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "method": "bdev_wait_for_examine" 00:38:17.276 } 00:38:17.276 ] 00:38:17.276 }, 00:38:17.276 { 00:38:17.276 "subsystem": "nbd", 00:38:17.276 "config": [] 00:38:17.276 } 00:38:17.276 ] 00:38:17.276 }' 00:38:17.276 11:53:09 keyring_file -- keyring/file.sh@115 -- # killprocess 3862358 00:38:17.276 11:53:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3862358 ']' 00:38:17.276 11:53:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3862358 00:38:17.276 11:53:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:17.276 11:53:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:17.276 11:53:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3862358 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3862358' 00:38:17.538 killing process with pid 3862358 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@973 -- # kill 3862358 00:38:17.538 Received shutdown signal, test time was about 1.000000 seconds 00:38:17.538 00:38:17.538 Latency(us) 00:38:17.538 [2024-12-09T10:53:09.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.538 [2024-12-09T10:53:09.700Z] =================================================================================================================== 00:38:17.538 [2024-12-09T10:53:09.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:17.538 11:53:09 
keyring_file -- common/autotest_common.sh@978 -- # wait 3862358 00:38:17.538 11:53:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=3863939 00:38:17.538 11:53:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3863939 /var/tmp/bperf.sock 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3863939 ']' 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:17.538 11:53:09 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:17.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:17.538 11:53:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.538 11:53:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:17.538 "subsystems": [ 00:38:17.538 { 00:38:17.538 "subsystem": "keyring", 00:38:17.538 "config": [ 00:38:17.538 { 00:38:17.538 "method": "keyring_file_add_key", 00:38:17.538 "params": { 00:38:17.538 "name": "key0", 00:38:17.538 "path": "/tmp/tmp.mOU8MnlpZc" 00:38:17.538 } 00:38:17.538 }, 00:38:17.538 { 00:38:17.538 "method": "keyring_file_add_key", 00:38:17.538 "params": { 00:38:17.538 "name": "key1", 00:38:17.538 "path": "/tmp/tmp.x9NejaSMrK" 00:38:17.538 } 00:38:17.538 } 00:38:17.538 ] 00:38:17.538 }, 00:38:17.538 { 00:38:17.538 "subsystem": "iobuf", 00:38:17.538 "config": [ 00:38:17.538 { 00:38:17.538 "method": "iobuf_set_options", 00:38:17.538 "params": { 00:38:17.538 "small_pool_count": 8192, 00:38:17.538 "large_pool_count": 1024, 00:38:17.538 "small_bufsize": 8192, 00:38:17.538 "large_bufsize": 135168, 00:38:17.538 "enable_numa": false 00:38:17.538 } 00:38:17.539 } 00:38:17.539 ] 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "subsystem": "sock", 00:38:17.539 "config": [ 00:38:17.539 { 00:38:17.539 "method": "sock_set_default_impl", 00:38:17.539 "params": { 00:38:17.539 "impl_name": "posix" 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "sock_impl_set_options", 00:38:17.539 "params": { 00:38:17.539 "impl_name": "ssl", 00:38:17.539 "recv_buf_size": 4096, 00:38:17.539 "send_buf_size": 4096, 00:38:17.539 "enable_recv_pipe": true, 00:38:17.539 "enable_quickack": false, 00:38:17.539 "enable_placement_id": 0, 00:38:17.539 "enable_zerocopy_send_server": true, 00:38:17.539 "enable_zerocopy_send_client": false, 00:38:17.539 "zerocopy_threshold": 0, 00:38:17.539 "tls_version": 0, 00:38:17.539 "enable_ktls": false 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "sock_impl_set_options", 00:38:17.539 "params": { 00:38:17.539 "impl_name": "posix", 00:38:17.539 "recv_buf_size": 2097152, 00:38:17.539 "send_buf_size": 2097152, 00:38:17.539 "enable_recv_pipe": true, 00:38:17.539 "enable_quickack": false, 00:38:17.539 "enable_placement_id": 0, 00:38:17.539 "enable_zerocopy_send_server": true, 00:38:17.539 "enable_zerocopy_send_client": false, 00:38:17.539 "zerocopy_threshold": 0, 00:38:17.539 "tls_version": 0, 00:38:17.539 "enable_ktls": false 00:38:17.539 } 00:38:17.539 } 00:38:17.539 ] 00:38:17.539 }, 
00:38:17.539 { 00:38:17.539 "subsystem": "vmd", 00:38:17.539 "config": [] 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "subsystem": "accel", 00:38:17.539 "config": [ 00:38:17.539 { 00:38:17.539 "method": "accel_set_options", 00:38:17.539 "params": { 00:38:17.539 "small_cache_size": 128, 00:38:17.539 "large_cache_size": 16, 00:38:17.539 "task_count": 2048, 00:38:17.539 "sequence_count": 2048, 00:38:17.539 "buf_count": 2048 00:38:17.539 } 00:38:17.539 } 00:38:17.539 ] 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "subsystem": "bdev", 00:38:17.539 "config": [ 00:38:17.539 { 00:38:17.539 "method": "bdev_set_options", 00:38:17.539 "params": { 00:38:17.539 "bdev_io_pool_size": 65535, 00:38:17.539 "bdev_io_cache_size": 256, 00:38:17.539 "bdev_auto_examine": true, 00:38:17.539 "iobuf_small_cache_size": 128, 00:38:17.539 "iobuf_large_cache_size": 16 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "bdev_raid_set_options", 00:38:17.539 "params": { 00:38:17.539 "process_window_size_kb": 1024, 00:38:17.539 "process_max_bandwidth_mb_sec": 0 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "bdev_iscsi_set_options", 00:38:17.539 "params": { 00:38:17.539 "timeout_sec": 30 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "bdev_nvme_set_options", 00:38:17.539 "params": { 00:38:17.539 "action_on_timeout": "none", 00:38:17.539 "timeout_us": 0, 00:38:17.539 "timeout_admin_us": 0, 00:38:17.539 "keep_alive_timeout_ms": 10000, 00:38:17.539 "arbitration_burst": 0, 00:38:17.539 "low_priority_weight": 0, 00:38:17.539 "medium_priority_weight": 0, 00:38:17.539 "high_priority_weight": 0, 00:38:17.539 "nvme_adminq_poll_period_us": 10000, 00:38:17.539 "nvme_ioq_poll_period_us": 0, 00:38:17.539 "io_queue_requests": 512, 00:38:17.539 "delay_cmd_submit": true, 00:38:17.539 "transport_retry_count": 4, 00:38:17.539 "bdev_retry_count": 3, 00:38:17.539 "transport_ack_timeout": 0, 00:38:17.539 "ctrlr_loss_timeout_sec": 0, 00:38:17.539 "reconnect_delay_sec": 0, 00:38:17.539 "fast_io_fail_timeout_sec": 0, 00:38:17.539 "disable_auto_failback": false, 00:38:17.539 "generate_uuids": false, 00:38:17.539 "transport_tos": 0, 00:38:17.539 "nvme_error_stat": false, 00:38:17.539 "rdma_srq_size": 0, 00:38:17.539 "io_path_stat": false, 00:38:17.539 "allow_accel_sequence": false, 00:38:17.539 "rdma_max_cq_size": 0, 00:38:17.539 "rdma_cm_event_timeout_ms": 0, 00:38:17.539 "dhchap_digests": [ 00:38:17.539 "sha256", 00:38:17.539 "sha384", 00:38:17.539 "sha512" 00:38:17.539 ], 00:38:17.539 "dhchap_dhgroups": [ 00:38:17.539 "null", 00:38:17.539 "ffdhe2048", 00:38:17.539 "ffdhe3072", 00:38:17.539 "ffdhe4096", 00:38:17.539 "ffdhe6144", 00:38:17.539 "ffdhe8192" 00:38:17.539 ] 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "bdev_nvme_attach_controller", 00:38:17.539 "params": { 00:38:17.539 "name": "nvme0", 00:38:17.539 "trtype": "TCP", 00:38:17.539 "adrfam": "IPv4", 00:38:17.539 "traddr": "127.0.0.1", 00:38:17.539 "trsvcid": "4420", 00:38:17.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.539 "prchk_reftag": false, 00:38:17.539 "prchk_guard": false, 00:38:17.539 "ctrlr_loss_timeout_sec": 0, 00:38:17.539 "reconnect_delay_sec": 0, 00:38:17.539 "fast_io_fail_timeout_sec": 0, 00:38:17.539 "psk": "key0", 00:38:17.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:17.539 "hdgst": false, 00:38:17.539 "ddgst": false, 00:38:17.539 "multipath": "multipath" 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "bdev_nvme_set_hotplug", 00:38:17.539 "params": { 
00:38:17.539 "period_us": 100000, 00:38:17.539 "enable": false 00:38:17.539 } 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "method": "bdev_wait_for_examine" 00:38:17.539 } 00:38:17.539 ] 00:38:17.539 }, 00:38:17.539 { 00:38:17.539 "subsystem": "nbd", 00:38:17.539 "config": [] 00:38:17.539 } 00:38:17.539 ] 00:38:17.539 }' 00:38:17.539 [2024-12-09 11:53:09.615336] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 00:38:17.539 [2024-12-09 11:53:09.615394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863939 ] 00:38:17.801 [2024-12-09 11:53:09.698866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.801 [2024-12-09 11:53:09.728408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:17.801 [2024-12-09 11:53:09.872665] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:18.373 11:53:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.373 11:53:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:18.373 11:53:10 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:18.373 11:53:10 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:18.373 11:53:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:18.634 11:53:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:18.634 11:53:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:18.634 11:53:10 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:18.634 11:53:10 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:18.634 11:53:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:18.895 11:53:10 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:18.895 11:53:10 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:18.895 11:53:10 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:18.895 11:53:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:19.155 11:53:11 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:19.155 11:53:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:19.155 11:53:11 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.mOU8MnlpZc /tmp/tmp.x9NejaSMrK 00:38:19.155 11:53:11 keyring_file -- keyring/file.sh@20 -- # killprocess 3863939 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3863939 ']' 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3863939 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3863939 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3863939' 00:38:19.155 killing process with pid 3863939 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@973 -- # kill 3863939 00:38:19.155 Received shutdown signal, test time was about 1.000000 seconds 00:38:19.155 00:38:19.155 Latency(us) 00:38:19.155 [2024-12-09T10:53:11.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:19.155 [2024-12-09T10:53:11.317Z] =================================================================================================================== 00:38:19.155 [2024-12-09T10:53:11.317Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@978 -- # wait 3863939 00:38:19.155 11:53:11 keyring_file -- keyring/file.sh@21 -- # killprocess 3862275 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3862275 ']' 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3862275 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:19.155 11:53:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3862275 00:38:19.416 11:53:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:19.416 11:53:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:19.416 11:53:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3862275' 00:38:19.416 killing process with pid 3862275 00:38:19.416 11:53:11 keyring_file -- common/autotest_common.sh@973 -- # kill 3862275 00:38:19.416 11:53:11 keyring_file -- common/autotest_common.sh@978 -- # wait 3862275 00:38:19.416 00:38:19.416 real 0m11.040s 00:38:19.416 user 0m26.600s 00:38:19.416 sys 0m2.577s 00:38:19.416 11:53:11 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.416 11:53:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:19.416 ************************************ 00:38:19.416 END TEST keyring_file 00:38:19.416 ************************************ 00:38:19.677 11:53:11 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:19.677 11:53:11 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:19.677 11:53:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:19.677 11:53:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.677 11:53:11 
-- common/autotest_common.sh@10 -- # set +x 00:38:19.677 ************************************ 00:38:19.677 START TEST keyring_linux 00:38:19.677 ************************************ 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:19.677 Joined session keyring: 331557622 00:38:19.677 * Looking for test storage... 00:38:19.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:19.677 11:53:11 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:19.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.677 --rc genhtml_branch_coverage=1 00:38:19.677 --rc genhtml_function_coverage=1 00:38:19.677 --rc genhtml_legend=1 00:38:19.677 --rc geninfo_all_blocks=1 00:38:19.677 --rc geninfo_unexecuted_blocks=1 00:38:19.677 00:38:19.677 ' 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:19.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.677 --rc genhtml_branch_coverage=1 00:38:19.677 --rc genhtml_function_coverage=1 00:38:19.677 --rc genhtml_legend=1 00:38:19.677 --rc geninfo_all_blocks=1 00:38:19.677 --rc geninfo_unexecuted_blocks=1 00:38:19.677 00:38:19.677 ' 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:19.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.677 --rc genhtml_branch_coverage=1 00:38:19.677 --rc genhtml_function_coverage=1 00:38:19.677 --rc genhtml_legend=1 00:38:19.677 --rc geninfo_all_blocks=1 00:38:19.677 --rc geninfo_unexecuted_blocks=1 00:38:19.677 00:38:19.677 ' 00:38:19.677 11:53:11 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:19.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.677 --rc genhtml_branch_coverage=1 00:38:19.677 --rc genhtml_function_coverage=1 00:38:19.677 --rc genhtml_legend=1 00:38:19.677 --rc geninfo_all_blocks=1 00:38:19.677 --rc geninfo_unexecuted_blocks=1 00:38:19.677 00:38:19.677 ' 00:38:19.677 11:53:11 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:19.677 11:53:11 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:19.677 11:53:11 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:19.938 11:53:11 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:19.938 11:53:11 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.938 11:53:11 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.938 11:53:11 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.938 11:53:11 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.938 11:53:11 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.938 11:53:11 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.938 11:53:11 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:19.938 11:53:11 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:19.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:19.938 11:53:11 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:19.938 11:53:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:19.938 11:53:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:19.938 11:53:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:19.938 11:53:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:19.939 11:53:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:19.939 11:53:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:19.939 11:53:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:19.939 /tmp/:spdk-test:key0 00:38:19.939 11:53:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:19.939 
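[annotation] The `prep_key` run above builds each key file in two steps: `mktemp` reserves a path (here `/tmp/:spdk-test:key0`), then `format_interchange_psk` — an inline `python -` heredoc in nvmf/common.sh's `format_key` — turns the raw key and digest into the `NVMeTLSkey-1:00:...` interchange string, which is written out and locked down to mode 0600, the same permission the keyring_file tests earlier required. A minimal standalone sketch of that formatting step, assuming it base64-encodes the key bytes with a little-endian CRC32 appended (the function name below is illustrative; only the inline python invocation is visible in the log):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int = 0) -> str:
    """Illustrative re-implementation of nvmf/common.sh format_key:
    wrap a configured PSK as 'NVMeTLSkey-1:<hh>:<base64(key || crc32)>:'."""
    data = key.encode("ascii")                 # the key bytes as configured
    crc = struct.pack("<I", zlib.crc32(data))  # 4-byte little-endian CRC32 tail
    b64 = base64.b64encode(data + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02}:{}:".format(hash_id, b64)

# For key0 above this should reproduce the interchange string seen in the log:
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```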
11:53:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:19.939 11:53:11 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:19.939 11:53:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:19.939 /tmp/:spdk-test:key1 00:38:19.939 11:53:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3864594 00:38:19.939 11:53:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3864594 00:38:19.939 11:53:11 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:19.939 11:53:11 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3864594 ']' 00:38:19.939 11:53:11 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.939 11:53:11 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:19.939 11:53:11 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:19.939 11:53:11 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:19.939 11:53:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:19.939 [2024-12-09 11:53:12.024338] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
00:38:19.939 [2024-12-09 11:53:12.024395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864594 ] 00:38:19.939 [2024-12-09 11:53:12.095098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.198 [2024-12-09 11:53:12.131685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:20.769 11:53:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:20.769 [2024-12-09 11:53:12.792801] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.769 null0 00:38:20.769 [2024-12-09 11:53:12.824844] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:20.769 [2024-12-09 11:53:12.825252] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.769 11:53:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:20.769 929057471 00:38:20.769 11:53:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:20.769 939291900 00:38:20.769 11:53:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3864611 00:38:20.769 11:53:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3864611 /var/tmp/bperf.sock 00:38:20.769 11:53:12 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3864611 ']' 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:20.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.769 11:53:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:20.769 [2024-12-09 11:53:12.904154] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization... 
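[annotation] With both PSK files staged and spdk_tgt listening on 127.0.0.1:4420, the test installs the same interchange strings as user-type keys on the Linux session keyring: `keyctl add user :spdk-test:key0 ... @s` prints serial 929057471 and the key1 call prints 939291900, the numbers keyring/linux.sh later resolves back with `keyctl search`. A self-contained sketch of that flow, assuming only the stock keyctl(1) CLI (the `keyctl()` helper is illustrative):

```python
import subprocess

def keyctl(*args: str) -> str:
    # Thin wrapper over keyctl(1), the same CLI keyring/linux.sh shells out to.
    proc = subprocess.run(("keyctl", *args), check=True,
                          capture_output=True, text=True)
    return proc.stdout.strip()

# Install a user-type key on the session keyring (@s); keyctl prints the new
# key's serial number (929057471 for key0 in the log above).
sn = keyctl("add", "user", ":spdk-test:key0",
            "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:",
            "@s")

# The test later resolves the name back to a serial (get_keysn) and dumps the
# payload (keyctl print), asserting both round-trip unchanged.
assert keyctl("search", "@s", "user", ":spdk-test:key0") == sn
payload = keyctl("print", sn)
```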
00:38:20.769 [2024-12-09 11:53:12.904209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864611 ] 00:38:21.029 [2024-12-09 11:53:12.987363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.029 [2024-12-09 11:53:13.017447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.598 11:53:13 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:21.598 11:53:13 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:21.598 11:53:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:21.598 11:53:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:21.858 11:53:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:21.858 11:53:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:22.118 11:53:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:22.118 11:53:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:22.118 [2024-12-09 11:53:14.202939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:22.118 nvme0n1 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:22.378 11:53:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:22.378 11:53:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:22.378 11:53:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.378 11:53:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:22.378 11:53:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.639 11:53:14 keyring_linux -- keyring/linux.sh@25 -- # sn=929057471 00:38:22.639 11:53:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:22.639 11:53:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:22.639 11:53:14 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 929057471 == \9\2\9\0\5\7\4\7\1 ]] 00:38:22.639 11:53:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 929057471 00:38:22.639 11:53:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:22.639 11:53:14 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:22.639 Running I/O for 1 seconds... 00:38:23.840 16186.00 IOPS, 63.23 MiB/s 00:38:23.840 Latency(us) 00:38:23.840 [2024-12-09T10:53:16.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.840 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:23.840 nvme0n1 : 1.01 16187.08 63.23 0.00 0.00 7874.28 5870.93 12834.13 00:38:23.840 [2024-12-09T10:53:16.002Z] =================================================================================================================== 00:38:23.840 [2024-12-09T10:53:16.002Z] Total : 16187.08 63.23 0.00 0.00 7874.28 5870.93 12834.13 00:38:23.840 { 00:38:23.840 "results": [ 00:38:23.840 { 00:38:23.840 "job": "nvme0n1", 00:38:23.840 "core_mask": "0x2", 00:38:23.840 "workload": "randread", 00:38:23.840 "status": "finished", 00:38:23.840 "queue_depth": 128, 00:38:23.840 "io_size": 4096, 00:38:23.840 "runtime": 1.007841, 00:38:23.840 "iops": 16187.07712823749, 00:38:23.840 "mibps": 63.230770032177695, 00:38:23.840 "io_failed": 0, 00:38:23.841 "io_timeout": 0, 00:38:23.841 "avg_latency_us": 7874.278380123412, 00:38:23.841 "min_latency_us": 5870.933333333333, 00:38:23.841 "max_latency_us": 12834.133333333333 00:38:23.841 } 00:38:23.841 ], 00:38:23.841 "core_count": 1 00:38:23.841 } 00:38:23.841 11:53:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:23.841 11:53:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:23.841 11:53:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:23.841 11:53:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:23.841 11:53:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:23.841 11:53:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:23.841 11:53:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:23.841 11:53:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.102 11:53:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:24.102 11:53:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:24.102 11:53:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:24.102 11:53:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:24.102 11:53:16 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:24.102 11:53:16 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:38:24.102 11:53:16 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:24.102 11:53:16 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.102 11:53:16 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:24.102 11:53:16 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.102 11:53:16 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:24.102 11:53:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:24.362 [2024-12-09 11:53:16.276325] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:24.362 [2024-12-09 11:53:16.277078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac3e0 (107): Transport endpoint is not connected 00:38:24.362 [2024-12-09 11:53:16.278075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac3e0 (9): Bad file descriptor 00:38:24.362 [2024-12-09 11:53:16.279077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:24.362 [2024-12-09 11:53:16.279088] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:24.362 [2024-12-09 11:53:16.279094] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:24.362 [2024-12-09 11:53:16.279100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
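(For orientation: the attach above names :spdk-test:key1 and is expected to fail; the NOT wrapper from common/autotest_common.sh inverts the exit status, so rejection is the passing outcome, and the request/response dump that follows shows the RPC coming back with -5, Input/output error. A rough stand-alone approximation of that negative check is sketched below; it is only a sketch, since the suite actually routes the call through bperf_cmd/valid_exec_arg, and rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log.)

  # Sketch of the `NOT bperf_cmd bdev_nvme_attach_controller ...` assertion:
  # this step passes only if the attach is rejected.
  if rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
       -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
       --psk :spdk-test:key1; then
    echo "expected attach with :spdk-test:key1 to fail" >&2
    exit 1
  fi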
00:38:24.362 request: 00:38:24.362 { 00:38:24.362 "name": "nvme0", 00:38:24.362 "trtype": "tcp", 00:38:24.362 "traddr": "127.0.0.1", 00:38:24.362 "adrfam": "ipv4", 00:38:24.363 "trsvcid": "4420", 00:38:24.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.363 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.363 "prchk_reftag": false, 00:38:24.363 "prchk_guard": false, 00:38:24.363 "hdgst": false, 00:38:24.363 "ddgst": false, 00:38:24.363 "psk": ":spdk-test:key1", 00:38:24.363 "allow_unrecognized_csi": false, 00:38:24.363 "method": "bdev_nvme_attach_controller", 00:38:24.363 "req_id": 1 00:38:24.363 } 00:38:24.363 Got JSON-RPC error response 00:38:24.363 response: 00:38:24.363 { 00:38:24.363 "code": -5, 00:38:24.363 "message": "Input/output error" 00:38:24.363 } 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@33 -- # sn=929057471 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 929057471 00:38:24.363 1 links removed 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@33 -- # sn=939291900 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 939291900 00:38:24.363 1 links removed 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3864611 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3864611 ']' 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3864611 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3864611 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3864611' 00:38:24.363 killing process with pid 3864611 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 3864611 00:38:24.363 Received shutdown signal, test time was about 1.000000 seconds 00:38:24.363 00:38:24.363 
Latency(us) 00:38:24.363 [2024-12-09T10:53:16.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.363 [2024-12-09T10:53:16.525Z] =================================================================================================================== 00:38:24.363 [2024-12-09T10:53:16.525Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 3864611 00:38:24.363 11:53:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3864594 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3864594 ']' 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3864594 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:24.363 11:53:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3864594 00:38:24.622 11:53:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:24.622 11:53:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:24.622 11:53:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3864594' 00:38:24.622 killing process with pid 3864594 00:38:24.622 11:53:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 3864594 00:38:24.622 11:53:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 3864594 00:38:24.622 00:38:24.622 real 0m5.128s 00:38:24.622 user 0m9.478s 00:38:24.622 sys 0m1.365s 00:38:24.622 11:53:16 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.622 11:53:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:24.622 ************************************ 00:38:24.622 END TEST keyring_linux 00:38:24.622 ************************************ 00:38:24.882 11:53:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:24.882 11:53:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:24.882 11:53:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:24.882 11:53:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:24.882 11:53:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:24.882 11:53:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:24.882 11:53:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:24.882 11:53:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:24.882 11:53:16 -- common/autotest_common.sh@10 -- # set +x 00:38:24.882 11:53:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:24.882 11:53:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:24.882 11:53:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:24.882 11:53:16 -- common/autotest_common.sh@10 -- # set +x 00:38:33.032 INFO: APP EXITING 
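(Recap of the keyring_linux flow that just finished, for anyone replaying it by hand: the test stores an NVMe TLS PSK as a `user` key in the kernel session keyring, enables SPDK's Linux keyring integration over the bperf RPC socket, attaches a TCP controller that names the key via --psk, and finally unlinks the key, as in the cleanup above. A minimal sketch under those assumptions follows; rpc.py again abbreviates the full scripts/rpc.py path from the log, the PSK payload is the interchange-format key printed by `keyctl print` above, and serial numbers such as 929057471 will differ from run to run.)

  # Seed the PSK into the session keyring; 'keyctl add' prints the new serial
  keyctl add user :spdk-test:key0 \
    'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
  sn=$(keyctl search @s user :spdk-test:key0)   # resolve the serial, cf. get_keysn

  # Let the SPDK app read keys from the kernel keyring, then start it up
  rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  rpc.py -s /var/tmp/bperf.sock framework_start_init

  # Attach over TCP, naming the key (TLS support is flagged experimental above)
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # Tear down, mirroring unlink_key in the cleanup above
  keyctl unlink "$sn"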
00:38:33.032 INFO: killing all VMs 00:38:33.032 INFO: killing vhost app 00:38:33.032 INFO: EXIT DONE 00:38:35.584 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:35.584 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:35.584 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:35.844 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:35.844 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:35.844 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:35.844 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:40.050 Cleaning 00:38:40.050 Removing: /var/run/dpdk/spdk0/config 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:40.050 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:40.050 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:40.050 Removing: /var/run/dpdk/spdk1/config 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:40.050 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:40.050 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:40.050 Removing: /var/run/dpdk/spdk2/config 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:40.050 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:40.050 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:40.050 Removing: /var/run/dpdk/spdk3/config 00:38:40.050 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:40.050 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:40.050 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:40.050 Removing: /var/run/dpdk/spdk4/config 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:40.050 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:40.050 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:40.050 Removing: /dev/shm/bdev_svc_trace.1 00:38:40.050 Removing: /dev/shm/nvmf_trace.0 00:38:40.050 Removing: /dev/shm/spdk_tgt_trace.pid3281339 00:38:40.050 Removing: /var/run/dpdk/spdk0 00:38:40.050 Removing: /var/run/dpdk/spdk1 00:38:40.050 Removing: /var/run/dpdk/spdk2 00:38:40.050 Removing: /var/run/dpdk/spdk3 00:38:40.050 Removing: /var/run/dpdk/spdk4 00:38:40.050 Removing: /var/run/dpdk/spdk_pid3279846 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3281339 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3282187 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3283223 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3283548 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3284635 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3284822 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3285111 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3286243 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3286991 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3287362 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3287720 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3288092 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3288398 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3288673 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3289021 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3289414 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3290473 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3293740 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3294107 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3294470 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3294801 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3295179 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3295205 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3295698 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3295898 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3296261 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3296398 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3296637 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3296859 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3297421 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3297664 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3297950 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3302644 00:38:40.051 Removing: 
/var/run/dpdk/spdk_pid3307983 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3320605 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3321388 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3326743 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3327092 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3332405 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3339530 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3342784 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3355436 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3366535 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3368736 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3370193 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3391380 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3396239 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3453515 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3459953 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3466963 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3475105 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3475157 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3476157 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3477267 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3478376 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3479396 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3479401 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3479737 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3479748 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3479779 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3480848 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3481863 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3482934 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3483577 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3483716 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3483983 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3485360 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3486634 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3496963 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3533281 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3538749 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3540748 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3542761 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3542983 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3543114 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3543211 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3543841 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3546025 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3547110 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3547637 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3550340 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3550723 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3551525 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3556595 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3563995 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3563997 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3563998 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3568722 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3578995 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3583871 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3591413 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3592913 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3594587 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3596280 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3601717 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3607233 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3612337 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3622120 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3622240 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3627369 00:38:40.051 Removing: 
/var/run/dpdk/spdk_pid3627554 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3627888 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3628441 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3628556 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3634000 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3634825 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3640095 00:38:40.051 Removing: /var/run/dpdk/spdk_pid3643415 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3649859 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3656800 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3666835 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3676129 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3676131 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3699857 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3700675 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3701462 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3702234 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3703292 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3703970 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3704661 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3705343 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3710483 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3710793 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3718212 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3718392 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3725214 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3730754 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3742527 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3743196 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3748326 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3748693 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3753779 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3760881 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3763866 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3776043 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3787344 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3789347 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3790435 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3810099 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3815064 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3818370 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3825554 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3825593 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3831924 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3834585 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3836864 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3838322 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3840727 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3842042 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3852271 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3852792 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3853433 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3856393 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3856934 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3857413 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3862275 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3862358 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3863939 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3864594 00:38:40.312 Removing: /var/run/dpdk/spdk_pid3864611 00:38:40.312 Clean 00:38:40.573 11:53:32 -- common/autotest_common.sh@1453 -- # return 0 00:38:40.573 11:53:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:40.573 11:53:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:40.573 11:53:32 -- common/autotest_common.sh@10 -- # set +x 00:38:40.573 11:53:32 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:38:40.573 
11:53:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:40.573 11:53:32 -- common/autotest_common.sh@10 -- # set +x 00:38:40.573 11:53:32 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:40.573 11:53:32 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:40.573 11:53:32 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:40.573 11:53:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:40.573 11:53:32 -- spdk/autotest.sh@398 -- # hostname 00:38:40.573 11:53:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:40.833 geninfo: WARNING: invalid characters removed from testname! 00:39:07.408 11:53:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:09.318 11:54:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:10.697 11:54:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:12.606 11:54:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:13.987 11:54:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:15.894 11:54:07 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:17.276 11:54:09 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:17.276 11:54:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:17.276 11:54:09 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:17.276 11:54:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:17.276 11:54:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:17.276 11:54:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:17.276 + [[ -n 3194723 ]] 00:39:17.276 + sudo kill 3194723 00:39:17.548 [Pipeline] } 00:39:17.563 [Pipeline] // stage 00:39:17.568 [Pipeline] } 00:39:17.583 [Pipeline] // timeout 00:39:17.588 [Pipeline] } 00:39:17.602 [Pipeline] // catchError 00:39:17.607 [Pipeline] } 00:39:17.622 [Pipeline] // wrap 00:39:17.628 [Pipeline] } 00:39:17.640 [Pipeline] // catchError 00:39:17.650 [Pipeline] stage 00:39:17.653 [Pipeline] { (Epilogue) 00:39:17.666 [Pipeline] catchError 00:39:17.668 [Pipeline] { 00:39:17.682 [Pipeline] echo 00:39:17.683 Cleanup processes 00:39:17.689 [Pipeline] sh 00:39:17.979 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:17.979 3878170 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:17.993 [Pipeline] sh 00:39:18.281 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:18.281 ++ awk '{print $1}' 00:39:18.281 ++ grep -v 'sudo pgrep' 00:39:18.281 + sudo kill -9 00:39:18.282 + true 00:39:18.295 [Pipeline] sh 00:39:18.592 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:31.082 [Pipeline] sh 00:39:31.371 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:31.371 Artifacts sizes are good 00:39:31.387 [Pipeline] archiveArtifacts 00:39:31.395 Archiving artifacts 00:39:31.530 [Pipeline] sh 00:39:31.818 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:31.833 [Pipeline] cleanWs 00:39:31.843 [WS-CLEANUP] Deleting project workspace... 00:39:31.843 [WS-CLEANUP] Deferred wipeout is used... 00:39:31.850 [WS-CLEANUP] done 00:39:31.852 [Pipeline] } 00:39:31.869 [Pipeline] // catchError 00:39:31.881 [Pipeline] sh 00:39:32.169 + logger -p user.info -t JENKINS-CI 00:39:32.179 [Pipeline] } 00:39:32.191 [Pipeline] // stage 00:39:32.196 [Pipeline] } 00:39:32.210 [Pipeline] // node 00:39:32.215 [Pipeline] End of Pipeline 00:39:32.251 Finished: SUCCESS
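(Closing note on the coverage pass that ran just before the pipeline epilogue: autotest captures test-time lcov data, folds it into the pre-test baseline, then prunes DPDK, system, and example/app sources before the report is archived. A condensed sketch of that sequence appears below; the long --rc lcov_*/genhtml_* option runs from the log are elided, and $SPDK_DIR is a stand-in for the spdk checkout under the workspace.)

  # Capture coverage gathered while the tests ran (-t tags the build host)
  lcov -q -c --no-external -d "$SPDK_DIR" -t spdk-cyp-12 -o cov_test.info

  # Fold the test capture into the pre-test baseline
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

  # Prune out-of-tree and non-target sources, as in the runs logged above
  # (the '/usr/*' pass additionally used --ignore-errors unused,unused)
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
             '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r cov_total.info "$pat" -o cov_total.info
  done
  rm -f cov_base.info cov_test.info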